00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2027 00:00:00.000 originally caused by: 00:00:00.001 Started by user Latecki, Karol 00:00:00.034 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.036 The recommended git tool is: git 00:00:00.036 using credential 00000000-0000-0000-0000-000000000002 00:00:00.038 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.082 Fetching changes from the remote Git repository 00:00:00.084 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.129 Using shallow fetch with depth 1 00:00:00.129 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.129 > git --version # timeout=10 00:00:00.178 > git --version # 'git version 2.39.2' 00:00:00.179 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.203 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.204 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/29/24129/6 # timeout=5 00:00:04.634 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.653 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.669 Checking out Revision e33ef006ccd688d2b66122cd0240b989d53c9017 (FETCH_HEAD) 00:00:04.669 > git config core.sparsecheckout # timeout=10 00:00:04.683 > git read-tree -mu HEAD # timeout=10 00:00:04.704 > git checkout -f e33ef006ccd688d2b66122cd0240b989d53c9017 # timeout=5 00:00:04.728 Commit message: "jenkins/jjb: remove nvme tests from distro specific jobs." 00:00:04.728 > git rev-list --no-walk e33ef006ccd688d2b66122cd0240b989d53c9017 # timeout=10 00:00:04.959 [Pipeline] Start of Pipeline 00:00:04.976 [Pipeline] library 00:00:04.978 Loading library shm_lib@master 00:00:04.978 Library shm_lib@master is cached. Copying from home. 00:00:04.994 [Pipeline] node 00:00:19.997 Still waiting to schedule task 00:00:19.998 Waiting for next available executor on ‘vagrant-vm-host’ 00:16:16.700 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:16:16.702 [Pipeline] { 00:16:16.716 [Pipeline] catchError 00:16:16.717 [Pipeline] { 00:16:16.735 [Pipeline] wrap 00:16:16.747 [Pipeline] { 00:16:16.757 [Pipeline] stage 00:16:16.759 [Pipeline] { (Prologue) 00:16:16.783 [Pipeline] echo 00:16:16.785 Node: VM-host-SM9 00:16:16.791 [Pipeline] cleanWs 00:16:16.801 [WS-CLEANUP] Deleting project workspace... 00:16:16.801 [WS-CLEANUP] Deferred wipeout is used... 
00:16:16.807 [WS-CLEANUP] done 00:16:17.043 [Pipeline] setCustomBuildProperty 00:16:17.142 [Pipeline] httpRequest 00:16:17.162 [Pipeline] echo 00:16:17.164 Sorcerer 10.211.164.101 is alive 00:16:17.171 [Pipeline] httpRequest 00:16:17.175 HttpMethod: GET 00:16:17.176 URL: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:16:17.176 Sending request to url: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:16:17.178 Response Code: HTTP/1.1 200 OK 00:16:17.179 Success: Status code 200 is in the accepted range: 200,404 00:16:17.180 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:16:17.325 [Pipeline] sh 00:16:17.609 + tar --no-same-owner -xf jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:16:17.628 [Pipeline] httpRequest 00:16:17.646 [Pipeline] echo 00:16:17.647 Sorcerer 10.211.164.101 is alive 00:16:17.655 [Pipeline] httpRequest 00:16:17.660 HttpMethod: GET 00:16:17.660 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:16:17.661 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:16:17.663 Response Code: HTTP/1.1 200 OK 00:16:17.664 Success: Status code 200 is in the accepted range: 200,404 00:16:17.664 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:16:19.838 [Pipeline] sh 00:16:20.118 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:16:23.426 [Pipeline] sh 00:16:23.705 + git -C spdk log --oneline -n5 00:16:23.705 dbef7efac test: fix dpdk builds on ubuntu24 00:16:23.705 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:16:23.705 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:16:23.705 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:16:23.705 e03c164a1 nvme: add nvme_ctrlr_lock 00:16:23.724 [Pipeline] writeFile 00:16:23.741 [Pipeline] sh 00:16:24.020 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:16:24.032 [Pipeline] sh 00:16:24.311 + cat autorun-spdk.conf 00:16:24.311 SPDK_RUN_FUNCTIONAL_TEST=1 00:16:24.311 SPDK_TEST_NVMF=1 00:16:24.311 SPDK_TEST_NVMF_TRANSPORT=tcp 00:16:24.311 SPDK_TEST_URING=1 00:16:24.311 SPDK_TEST_VFIOUSER=1 00:16:24.311 SPDK_TEST_USDT=1 00:16:24.311 SPDK_RUN_UBSAN=1 00:16:24.311 NET_TYPE=virt 00:16:24.311 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:16:24.317 RUN_NIGHTLY=1 00:16:24.319 [Pipeline] } 00:16:24.336 [Pipeline] // stage 00:16:24.352 [Pipeline] stage 00:16:24.354 [Pipeline] { (Run VM) 00:16:24.367 [Pipeline] sh 00:16:24.646 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:16:24.646 + echo 'Start stage prepare_nvme.sh' 00:16:24.646 Start stage prepare_nvme.sh 00:16:24.646 + [[ -n 1 ]] 00:16:24.646 + disk_prefix=ex1 00:16:24.646 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:16:24.646 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:16:24.646 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:16:24.646 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:16:24.646 ++ SPDK_TEST_NVMF=1 00:16:24.646 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:16:24.646 ++ SPDK_TEST_URING=1 00:16:24.646 ++ SPDK_TEST_VFIOUSER=1 00:16:24.646 ++ SPDK_TEST_USDT=1 00:16:24.646 ++ SPDK_RUN_UBSAN=1 00:16:24.646 ++ NET_TYPE=virt 00:16:24.646 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:16:24.646 ++ RUN_NIGHTLY=1 
00:16:24.646 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:16:24.646 + nvme_files=() 00:16:24.646 + declare -A nvme_files 00:16:24.646 + backend_dir=/var/lib/libvirt/images/backends 00:16:24.646 + nvme_files['nvme.img']=5G 00:16:24.646 + nvme_files['nvme-cmb.img']=5G 00:16:24.646 + nvme_files['nvme-multi0.img']=4G 00:16:24.646 + nvme_files['nvme-multi1.img']=4G 00:16:24.646 + nvme_files['nvme-multi2.img']=4G 00:16:24.646 + nvme_files['nvme-openstack.img']=8G 00:16:24.646 + nvme_files['nvme-zns.img']=5G 00:16:24.646 + (( SPDK_TEST_NVME_PMR == 1 )) 00:16:24.646 + (( SPDK_TEST_FTL == 1 )) 00:16:24.646 + (( SPDK_TEST_NVME_FDP == 1 )) 00:16:24.646 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:16:24.646 + for nvme in "${!nvme_files[@]}" 00:16:24.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:16:24.646 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:16:24.646 + for nvme in "${!nvme_files[@]}" 00:16:24.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:16:24.646 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:16:24.646 + for nvme in "${!nvme_files[@]}" 00:16:24.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:16:24.904 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:16:24.904 + for nvme in "${!nvme_files[@]}" 00:16:24.904 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:16:24.904 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:16:24.904 + for nvme in "${!nvme_files[@]}" 00:16:24.904 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:16:24.904 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:16:24.904 + for nvme in "${!nvme_files[@]}" 00:16:24.904 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:16:25.161 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:16:25.161 + for nvme in "${!nvme_files[@]}" 00:16:25.161 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:16:25.161 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:16:25.161 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:16:25.419 + echo 'End stage prepare_nvme.sh' 00:16:25.419 End stage prepare_nvme.sh 00:16:25.430 [Pipeline] sh 00:16:25.709 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:16:25.709 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora38 00:16:25.709 00:16:25.709 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 
00:16:25.709 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:16:25.709 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:16:25.709 HELP=0 00:16:25.709 DRY_RUN=0 00:16:25.709 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:16:25.709 NVME_DISKS_TYPE=nvme,nvme, 00:16:25.709 NVME_AUTO_CREATE=0 00:16:25.709 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:16:25.709 NVME_CMB=,, 00:16:25.709 NVME_PMR=,, 00:16:25.709 NVME_ZNS=,, 00:16:25.709 NVME_MS=,, 00:16:25.709 NVME_FDP=,, 00:16:25.709 SPDK_VAGRANT_DISTRO=fedora38 00:16:25.709 SPDK_VAGRANT_VMCPU=10 00:16:25.709 SPDK_VAGRANT_VMRAM=12288 00:16:25.709 SPDK_VAGRANT_PROVIDER=libvirt 00:16:25.709 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:16:25.709 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:16:25.709 SPDK_OPENSTACK_NETWORK=0 00:16:25.709 VAGRANT_PACKAGE_BOX=0 00:16:25.709 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:16:25.709 FORCE_DISTRO=true 00:16:25.709 VAGRANT_BOX_VERSION= 00:16:25.709 EXTRA_VAGRANTFILES= 00:16:25.709 NIC_MODEL=e1000 00:16:25.709 00:16:25.709 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:16:25.709 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:16:29.895 Bringing machine 'default' up with 'libvirt' provider... 00:16:30.154 ==> default: Creating image (snapshot of base box volume). 00:16:30.154 ==> default: Creating domain with the following settings... 00:16:30.154 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721663612_e03c50c1b3e664b50236 00:16:30.154 ==> default: -- Domain type: kvm 00:16:30.154 ==> default: -- Cpus: 10 00:16:30.154 ==> default: -- Feature: acpi 00:16:30.154 ==> default: -- Feature: apic 00:16:30.154 ==> default: -- Feature: pae 00:16:30.154 ==> default: -- Memory: 12288M 00:16:30.154 ==> default: -- Memory Backing: hugepages: 00:16:30.154 ==> default: -- Management MAC: 00:16:30.154 ==> default: -- Loader: 00:16:30.154 ==> default: -- Nvram: 00:16:30.154 ==> default: -- Base box: spdk/fedora38 00:16:30.154 ==> default: -- Storage pool: default 00:16:30.154 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721663612_e03c50c1b3e664b50236.img (20G) 00:16:30.154 ==> default: -- Volume Cache: default 00:16:30.154 ==> default: -- Kernel: 00:16:30.154 ==> default: -- Initrd: 00:16:30.154 ==> default: -- Graphics Type: vnc 00:16:30.154 ==> default: -- Graphics Port: -1 00:16:30.154 ==> default: -- Graphics IP: 127.0.0.1 00:16:30.154 ==> default: -- Graphics Password: Not defined 00:16:30.154 ==> default: -- Video Type: cirrus 00:16:30.154 ==> default: -- Video VRAM: 9216 00:16:30.154 ==> default: -- Sound Type: 00:16:30.154 ==> default: -- Keymap: en-us 00:16:30.154 ==> default: -- TPM Path: 00:16:30.154 ==> default: -- INPUT: type=mouse, bus=ps2 00:16:30.154 ==> default: -- Command line args: 00:16:30.154 ==> default: -> value=-device, 00:16:30.154 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:16:30.154 ==> default: -> value=-drive, 00:16:30.154 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:16:30.154 ==> default: -> value=-device, 00:16:30.154 ==> 
default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:16:30.154 ==> default: -> value=-device, 00:16:30.154 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:16:30.154 ==> default: -> value=-drive, 00:16:30.154 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:16:30.154 ==> default: -> value=-device, 00:16:30.154 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:16:30.154 ==> default: -> value=-drive, 00:16:30.154 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:16:30.154 ==> default: -> value=-device, 00:16:30.154 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:16:30.154 ==> default: -> value=-drive, 00:16:30.154 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:16:30.154 ==> default: -> value=-device, 00:16:30.154 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:16:30.413 ==> default: Creating shared folders metadata... 00:16:30.413 ==> default: Starting domain. 00:16:31.790 ==> default: Waiting for domain to get an IP address... 00:16:46.658 ==> default: Waiting for SSH to become available... 00:16:48.032 ==> default: Configuring and enabling network interfaces... 00:16:52.239 default: SSH address: 192.168.121.88:22 00:16:52.239 default: SSH username: vagrant 00:16:52.239 default: SSH auth method: private key 00:16:53.638 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:17:01.777 ==> default: Mounting SSHFS shared folder... 00:17:02.709 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:17:02.709 ==> default: Checking Mount.. 00:17:03.642 ==> default: Folder Successfully Mounted! 00:17:03.643 ==> default: Running provisioner: file... 00:17:04.575 default: ~/.gitconfig => .gitconfig 00:17:04.833 00:17:04.833 SUCCESS! 00:17:04.833 00:17:04.833 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:17:04.833 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:17:04.833 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
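For reference, the NVMe layout that the vagrant-libvirt provider handed to QEMU above can be reassembled into one stand-alone invocation. The sketch below only regroups the -device/-drive arguments already printed during domain creation; it omits the rest of the VM definition (machine type, memory, boot disk, networking), and the binary path is simply the SPDK_QEMU_EMULATOR value used by this job.

# Hand-assembled from the arguments logged above: controller nvme-0 (serial 12340)
# with a single namespace backed by ex1-nvme.img, and controller nvme-1 (serial 12341)
# with three namespaces backed by the ex1-nvme-multi*.img files.
/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
  -device nvme,id=nvme-0,serial=12340 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -device nvme,id=nvme-1,serial=12341 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096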
00:17:04.833 00:17:04.844 [Pipeline] } 00:17:04.867 [Pipeline] // stage 00:17:04.877 [Pipeline] dir 00:17:04.878 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:17:04.883 [Pipeline] { 00:17:04.903 [Pipeline] catchError 00:17:04.905 [Pipeline] { 00:17:04.937 [Pipeline] sh 00:17:05.214 + vagrant ssh-config --host vagrant 00:17:05.214 + sed -ne /^Host/,$p 00:17:05.214 + tee ssh_conf 00:17:09.395 Host vagrant 00:17:09.395 HostName 192.168.121.88 00:17:09.395 User vagrant 00:17:09.395 Port 22 00:17:09.395 UserKnownHostsFile /dev/null 00:17:09.395 StrictHostKeyChecking no 00:17:09.395 PasswordAuthentication no 00:17:09.395 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:17:09.395 IdentitiesOnly yes 00:17:09.395 LogLevel FATAL 00:17:09.395 ForwardAgent yes 00:17:09.395 ForwardX11 yes 00:17:09.395 00:17:09.410 [Pipeline] withEnv 00:17:09.412 [Pipeline] { 00:17:09.428 [Pipeline] sh 00:17:09.707 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:17:09.707 source /etc/os-release 00:17:09.707 [[ -e /image.version ]] && img=$(< /image.version) 00:17:09.707 # Minimal, systemd-like check. 00:17:09.707 if [[ -e /.dockerenv ]]; then 00:17:09.707 # Clear garbage from the node's name: 00:17:09.707 # agt-er_autotest_547-896 -> autotest_547-896 00:17:09.707 # $HOSTNAME is the actual container id 00:17:09.707 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:17:09.707 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:17:09.707 # We can assume this is a mount from a host where container is running, 00:17:09.707 # so fetch its hostname to easily identify the target swarm worker. 00:17:09.707 container="$(< /etc/hostname) ($agent)" 00:17:09.707 else 00:17:09.707 # Fallback 00:17:09.707 container=$agent 00:17:09.707 fi 00:17:09.707 fi 00:17:09.707 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:17:09.707 00:17:09.719 [Pipeline] } 00:17:09.740 [Pipeline] // withEnv 00:17:09.750 [Pipeline] setCustomBuildProperty 00:17:09.765 [Pipeline] stage 00:17:09.768 [Pipeline] { (Tests) 00:17:09.787 [Pipeline] sh 00:17:10.106 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:17:10.120 [Pipeline] sh 00:17:10.416 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:17:10.429 [Pipeline] timeout 00:17:10.429 Timeout set to expire in 30 min 00:17:10.431 [Pipeline] { 00:17:10.444 [Pipeline] sh 00:17:10.717 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:17:11.283 HEAD is now at dbef7efac test: fix dpdk builds on ubuntu24 00:17:11.298 [Pipeline] sh 00:17:11.576 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:17:11.592 [Pipeline] sh 00:17:11.870 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:17:11.887 [Pipeline] sh 00:17:12.165 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:17:12.165 ++ readlink -f spdk_repo 00:17:12.165 + DIR_ROOT=/home/vagrant/spdk_repo 00:17:12.165 + [[ -n /home/vagrant/spdk_repo ]] 00:17:12.165 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:17:12.165 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:17:12.165 + [[ -d 
/home/vagrant/spdk_repo/spdk ]] 00:17:12.165 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:17:12.165 + [[ -d /home/vagrant/spdk_repo/output ]] 00:17:12.165 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:17:12.165 + cd /home/vagrant/spdk_repo 00:17:12.165 + source /etc/os-release 00:17:12.165 ++ NAME='Fedora Linux' 00:17:12.165 ++ VERSION='38 (Cloud Edition)' 00:17:12.165 ++ ID=fedora 00:17:12.165 ++ VERSION_ID=38 00:17:12.165 ++ VERSION_CODENAME= 00:17:12.165 ++ PLATFORM_ID=platform:f38 00:17:12.165 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:17:12.165 ++ ANSI_COLOR='0;38;2;60;110;180' 00:17:12.165 ++ LOGO=fedora-logo-icon 00:17:12.165 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:17:12.165 ++ HOME_URL=https://fedoraproject.org/ 00:17:12.165 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:17:12.165 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:17:12.165 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:17:12.165 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:17:12.166 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:17:12.166 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:17:12.166 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:17:12.166 ++ SUPPORT_END=2024-05-14 00:17:12.166 ++ VARIANT='Cloud Edition' 00:17:12.166 ++ VARIANT_ID=cloud 00:17:12.166 + uname -a 00:17:12.166 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:17:12.166 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:17:12.423 Hugepages 00:17:12.423 node hugesize free / total 00:17:12.423 node0 1048576kB 0 / 0 00:17:12.423 node0 2048kB 0 / 0 00:17:12.423 00:17:12.424 Type BDF Vendor Device NUMA Driver Device Block devices 00:17:12.424 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:17:12.424 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:17:12.424 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:17:12.424 + rm -f /tmp/spdk-ld-path 00:17:12.424 + source autorun-spdk.conf 00:17:12.424 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:17:12.424 ++ SPDK_TEST_NVMF=1 00:17:12.424 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:17:12.424 ++ SPDK_TEST_URING=1 00:17:12.424 ++ SPDK_TEST_VFIOUSER=1 00:17:12.424 ++ SPDK_TEST_USDT=1 00:17:12.424 ++ SPDK_RUN_UBSAN=1 00:17:12.424 ++ NET_TYPE=virt 00:17:12.424 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:17:12.424 ++ RUN_NIGHTLY=1 00:17:12.424 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:17:12.424 + [[ -n '' ]] 00:17:12.424 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:17:12.424 + for M in /var/spdk/build-*-manifest.txt 00:17:12.424 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:17:12.424 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:17:12.424 + for M in /var/spdk/build-*-manifest.txt 00:17:12.424 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:17:12.424 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:17:12.424 ++ uname 00:17:12.424 + [[ Linux == \L\i\n\u\x ]] 00:17:12.424 + sudo dmesg -T 00:17:12.687 + sudo dmesg --clear 00:17:12.687 + dmesg_pid=5123 00:17:12.687 + sudo dmesg -Tw 00:17:12.687 + [[ Fedora Linux == FreeBSD ]] 00:17:12.687 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:17:12.687 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:17:12.687 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:17:12.687 + [[ -x /usr/src/fio-static/fio ]] 00:17:12.687 + export FIO_BIN=/usr/src/fio-static/fio 00:17:12.687 + 
FIO_BIN=/usr/src/fio-static/fio 00:17:12.687 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:17:12.687 + [[ ! -v VFIO_QEMU_BIN ]] 00:17:12.687 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:17:12.687 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:17:12.687 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:17:12.687 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:17:12.687 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:17:12.687 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:17:12.687 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:17:12.687 Test configuration: 00:17:12.687 SPDK_RUN_FUNCTIONAL_TEST=1 00:17:12.687 SPDK_TEST_NVMF=1 00:17:12.687 SPDK_TEST_NVMF_TRANSPORT=tcp 00:17:12.687 SPDK_TEST_URING=1 00:17:12.687 SPDK_TEST_VFIOUSER=1 00:17:12.687 SPDK_TEST_USDT=1 00:17:12.687 SPDK_RUN_UBSAN=1 00:17:12.687 NET_TYPE=virt 00:17:12.687 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:17:12.687 RUN_NIGHTLY=1 15:54:15 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:12.687 15:54:15 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:17:12.687 15:54:15 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.687 15:54:15 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.687 15:54:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.687 15:54:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.688 15:54:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.688 15:54:15 -- paths/export.sh@5 -- $ export PATH 00:17:12.688 15:54:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.688 15:54:15 -- common/autobuild_common.sh@437 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:17:12.688 15:54:15 -- common/autobuild_common.sh@438 -- $ date +%s 00:17:12.688 15:54:15 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721663655.XXXXXX 00:17:12.688 15:54:15 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721663655.oaa6wx 00:17:12.688 15:54:15 -- 
common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:17:12.688 15:54:15 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']' 00:17:12.688 15:54:15 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:17:12.688 15:54:15 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:17:12.688 15:54:15 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:17:12.688 15:54:15 -- common/autobuild_common.sh@454 -- $ get_config_params 00:17:12.688 15:54:15 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:17:12.688 15:54:15 -- common/autotest_common.sh@10 -- $ set +x 00:17:12.688 15:54:15 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:17:12.688 15:54:15 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:17:12.688 15:54:15 -- spdk/autobuild.sh@12 -- $ umask 022 00:17:12.688 15:54:15 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:17:12.688 15:54:15 -- spdk/autobuild.sh@16 -- $ date -u 00:17:12.688 Mon Jul 22 03:54:15 PM UTC 2024 00:17:12.688 15:54:15 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:17:12.688 LTS-60-gdbef7efac 00:17:12.688 15:54:15 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:17:12.688 15:54:15 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:17:12.688 15:54:15 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:17:12.688 15:54:15 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:17:12.688 15:54:15 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:17:12.688 15:54:15 -- common/autotest_common.sh@10 -- $ set +x 00:17:12.688 ************************************ 00:17:12.688 START TEST ubsan 00:17:12.688 ************************************ 00:17:12.688 using ubsan 00:17:12.688 15:54:15 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:17:12.688 00:17:12.688 real 0m0.000s 00:17:12.688 user 0m0.000s 00:17:12.688 sys 0m0.000s 00:17:12.688 15:54:15 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:17:12.688 ************************************ 00:17:12.688 END TEST ubsan 00:17:12.688 ************************************ 00:17:12.688 15:54:15 -- common/autotest_common.sh@10 -- $ set +x 00:17:12.688 15:54:15 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:17:12.688 15:54:15 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:17:12.688 15:54:15 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:17:12.688 15:54:15 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:17:12.688 15:54:15 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:17:12.688 15:54:15 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:17:12.688 15:54:15 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:17:12.688 15:54:15 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:17:12.688 15:54:15 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:17:12.946 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 
00:17:12.946 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:17:13.511 Using 'verbs' RDMA provider 00:17:26.276 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:17:38.481 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:17:38.481 Creating mk/config.mk...done. 00:17:38.481 Creating mk/cc.flags.mk...done. 00:17:38.481 Type 'make' to build. 00:17:38.481 15:54:40 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:17:38.481 15:54:40 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:17:38.481 15:54:40 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:17:38.481 15:54:40 -- common/autotest_common.sh@10 -- $ set +x 00:17:38.482 ************************************ 00:17:38.482 START TEST make 00:17:38.482 ************************************ 00:17:38.482 15:54:40 -- common/autotest_common.sh@1104 -- $ make -j10 00:17:38.482 make[1]: Nothing to be done for 'all'. 00:17:39.415 The Meson build system 00:17:39.415 Version: 1.3.1 00:17:39.415 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:17:39.415 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:17:39.415 Build type: native build 00:17:39.415 Project name: libvfio-user 00:17:39.415 Project version: 0.0.1 00:17:39.415 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:17:39.415 C linker for the host machine: cc ld.bfd 2.39-16 00:17:39.415 Host machine cpu family: x86_64 00:17:39.415 Host machine cpu: x86_64 00:17:39.415 Run-time dependency threads found: YES 00:17:39.415 Library dl found: YES 00:17:39.415 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:17:39.415 Run-time dependency json-c found: YES 0.17 00:17:39.415 Run-time dependency cmocka found: YES 1.1.7 00:17:39.415 Program pytest-3 found: NO 00:17:39.415 Program flake8 found: NO 00:17:39.415 Program misspell-fixer found: NO 00:17:39.415 Program restructuredtext-lint found: NO 00:17:39.415 Program valgrind found: YES (/usr/bin/valgrind) 00:17:39.415 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:17:39.415 Compiler for C supports arguments -Wmissing-declarations: YES 00:17:39.415 Compiler for C supports arguments -Wwrite-strings: YES 00:17:39.415 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:17:39.415 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:17:39.415 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:17:39.415 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:17:39.415 Build targets in project: 8 00:17:39.415 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:17:39.415 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:17:39.415 00:17:39.415 libvfio-user 0.0.1 00:17:39.415 00:17:39.415 User defined options 00:17:39.415 buildtype : debug 00:17:39.415 default_library: shared 00:17:39.415 libdir : /usr/local/lib 00:17:39.415 00:17:39.415 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:17:39.980 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:17:39.980 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:17:39.980 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:17:39.980 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:17:39.980 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:17:39.980 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:17:39.980 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:17:39.980 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:17:40.239 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:17:40.239 [9/37] Compiling C object samples/null.p/null.c.o 00:17:40.239 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:17:40.239 [11/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:17:40.239 [12/37] Compiling C object samples/lspci.p/lspci.c.o 00:17:40.239 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:17:40.239 [14/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:17:40.239 [15/37] Compiling C object samples/client.p/client.c.o 00:17:40.239 [16/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:17:40.239 [17/37] Compiling C object samples/server.p/server.c.o 00:17:40.239 [18/37] Linking target samples/client 00:17:40.239 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:17:40.239 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:17:40.239 [21/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:17:40.239 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:17:40.239 [23/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:17:40.239 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:17:40.239 [25/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:17:40.239 [26/37] Linking target lib/libvfio-user.so.0.0.1 00:17:40.497 [27/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:17:40.497 [28/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:17:40.497 [29/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:17:40.497 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:17:40.497 [31/37] Linking target test/unit_tests 00:17:40.497 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:17:40.497 [33/37] Linking target samples/shadow_ioeventfd_server 00:17:40.497 [34/37] Linking target samples/server 00:17:40.497 [35/37] Linking target samples/null 00:17:40.497 [36/37] Linking target samples/lspci 00:17:40.497 [37/37] Linking target samples/gpio-pci-idio-16 00:17:40.497 INFO: autodetecting backend as ninja 00:17:40.497 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:17:40.755 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:17:41.012 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:17:41.012 ninja: no work to do. 00:17:55.884 The Meson build system 00:17:55.884 Version: 1.3.1 00:17:55.884 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:17:55.884 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:17:55.884 Build type: native build 00:17:55.884 Program cat found: YES (/usr/bin/cat) 00:17:55.884 Project name: DPDK 00:17:55.884 Project version: 23.11.0 00:17:55.884 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:17:55.884 C linker for the host machine: cc ld.bfd 2.39-16 00:17:55.884 Host machine cpu family: x86_64 00:17:55.884 Host machine cpu: x86_64 00:17:55.884 Message: ## Building in Developer Mode ## 00:17:55.884 Program pkg-config found: YES (/usr/bin/pkg-config) 00:17:55.884 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:17:55.884 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:17:55.884 Program python3 found: YES (/usr/bin/python3) 00:17:55.885 Program cat found: YES (/usr/bin/cat) 00:17:55.885 Compiler for C supports arguments -march=native: YES 00:17:55.885 Checking for size of "void *" : 8 00:17:55.885 Checking for size of "void *" : 8 (cached) 00:17:55.885 Library m found: YES 00:17:55.885 Library numa found: YES 00:17:55.885 Has header "numaif.h" : YES 00:17:55.885 Library fdt found: NO 00:17:55.885 Library execinfo found: NO 00:17:55.885 Has header "execinfo.h" : YES 00:17:55.885 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:17:55.885 Run-time dependency libarchive found: NO (tried pkgconfig) 00:17:55.885 Run-time dependency libbsd found: NO (tried pkgconfig) 00:17:55.885 Run-time dependency jansson found: NO (tried pkgconfig) 00:17:55.885 Run-time dependency openssl found: YES 3.0.9 00:17:55.885 Run-time dependency libpcap found: YES 1.10.4 00:17:55.885 Has header "pcap.h" with dependency libpcap: YES 00:17:55.885 Compiler for C supports arguments -Wcast-qual: YES 00:17:55.885 Compiler for C supports arguments -Wdeprecated: YES 00:17:55.885 Compiler for C supports arguments -Wformat: YES 00:17:55.885 Compiler for C supports arguments -Wformat-nonliteral: NO 00:17:55.885 Compiler for C supports arguments -Wformat-security: NO 00:17:55.885 Compiler for C supports arguments -Wmissing-declarations: YES 00:17:55.885 Compiler for C supports arguments -Wmissing-prototypes: YES 00:17:55.885 Compiler for C supports arguments -Wnested-externs: YES 00:17:55.885 Compiler for C supports arguments -Wold-style-definition: YES 00:17:55.885 Compiler for C supports arguments -Wpointer-arith: YES 00:17:55.885 Compiler for C supports arguments -Wsign-compare: YES 00:17:55.885 Compiler for C supports arguments -Wstrict-prototypes: YES 00:17:55.885 Compiler for C supports arguments -Wundef: YES 00:17:55.885 Compiler for C supports arguments -Wwrite-strings: YES 00:17:55.885 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:17:55.885 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:17:55.885 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:17:55.885 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:17:55.885 Program objdump found: YES (/usr/bin/objdump) 00:17:55.885 
Compiler for C supports arguments -mavx512f: YES 00:17:55.885 Checking if "AVX512 checking" compiles: YES 00:17:55.885 Fetching value of define "__SSE4_2__" : 1 00:17:55.885 Fetching value of define "__AES__" : 1 00:17:55.885 Fetching value of define "__AVX__" : 1 00:17:55.885 Fetching value of define "__AVX2__" : 1 00:17:55.885 Fetching value of define "__AVX512BW__" : (undefined) 00:17:55.885 Fetching value of define "__AVX512CD__" : (undefined) 00:17:55.885 Fetching value of define "__AVX512DQ__" : (undefined) 00:17:55.885 Fetching value of define "__AVX512F__" : (undefined) 00:17:55.885 Fetching value of define "__AVX512VL__" : (undefined) 00:17:55.885 Fetching value of define "__PCLMUL__" : 1 00:17:55.885 Fetching value of define "__RDRND__" : 1 00:17:55.885 Fetching value of define "__RDSEED__" : 1 00:17:55.885 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:17:55.885 Fetching value of define "__znver1__" : (undefined) 00:17:55.885 Fetching value of define "__znver2__" : (undefined) 00:17:55.885 Fetching value of define "__znver3__" : (undefined) 00:17:55.885 Fetching value of define "__znver4__" : (undefined) 00:17:55.885 Compiler for C supports arguments -Wno-format-truncation: YES 00:17:55.885 Message: lib/log: Defining dependency "log" 00:17:55.885 Message: lib/kvargs: Defining dependency "kvargs" 00:17:55.885 Message: lib/telemetry: Defining dependency "telemetry" 00:17:55.885 Checking for function "getentropy" : NO 00:17:55.885 Message: lib/eal: Defining dependency "eal" 00:17:55.885 Message: lib/ring: Defining dependency "ring" 00:17:55.885 Message: lib/rcu: Defining dependency "rcu" 00:17:55.885 Message: lib/mempool: Defining dependency "mempool" 00:17:55.885 Message: lib/mbuf: Defining dependency "mbuf" 00:17:55.885 Fetching value of define "__PCLMUL__" : 1 (cached) 00:17:55.885 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:17:55.885 Compiler for C supports arguments -mpclmul: YES 00:17:55.885 Compiler for C supports arguments -maes: YES 00:17:55.885 Compiler for C supports arguments -mavx512f: YES (cached) 00:17:55.885 Compiler for C supports arguments -mavx512bw: YES 00:17:55.885 Compiler for C supports arguments -mavx512dq: YES 00:17:55.885 Compiler for C supports arguments -mavx512vl: YES 00:17:55.885 Compiler for C supports arguments -mvpclmulqdq: YES 00:17:55.885 Compiler for C supports arguments -mavx2: YES 00:17:55.885 Compiler for C supports arguments -mavx: YES 00:17:55.885 Message: lib/net: Defining dependency "net" 00:17:55.885 Message: lib/meter: Defining dependency "meter" 00:17:55.885 Message: lib/ethdev: Defining dependency "ethdev" 00:17:55.885 Message: lib/pci: Defining dependency "pci" 00:17:55.885 Message: lib/cmdline: Defining dependency "cmdline" 00:17:55.885 Message: lib/hash: Defining dependency "hash" 00:17:55.885 Message: lib/timer: Defining dependency "timer" 00:17:55.885 Message: lib/compressdev: Defining dependency "compressdev" 00:17:55.885 Message: lib/cryptodev: Defining dependency "cryptodev" 00:17:55.885 Message: lib/dmadev: Defining dependency "dmadev" 00:17:55.885 Compiler for C supports arguments -Wno-cast-qual: YES 00:17:55.885 Message: lib/power: Defining dependency "power" 00:17:55.885 Message: lib/reorder: Defining dependency "reorder" 00:17:55.885 Message: lib/security: Defining dependency "security" 00:17:55.885 Has header "linux/userfaultfd.h" : YES 00:17:55.885 Has header "linux/vduse.h" : YES 00:17:55.885 Message: lib/vhost: Defining dependency "vhost" 00:17:55.885 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:17:55.885 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:17:55.885 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:17:55.885 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:17:55.885 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:17:55.885 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:17:55.885 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:17:55.885 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:17:55.885 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:17:55.885 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:17:55.885 Program doxygen found: YES (/usr/bin/doxygen) 00:17:55.885 Configuring doxy-api-html.conf using configuration 00:17:55.885 Configuring doxy-api-man.conf using configuration 00:17:55.885 Program mandb found: YES (/usr/bin/mandb) 00:17:55.885 Program sphinx-build found: NO 00:17:55.885 Configuring rte_build_config.h using configuration 00:17:55.885 Message: 00:17:55.885 ================= 00:17:55.885 Applications Enabled 00:17:55.885 ================= 00:17:55.885 00:17:55.885 apps: 00:17:55.885 00:17:55.885 00:17:55.885 Message: 00:17:55.885 ================= 00:17:55.885 Libraries Enabled 00:17:55.885 ================= 00:17:55.885 00:17:55.885 libs: 00:17:55.885 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:17:55.885 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:17:55.885 cryptodev, dmadev, power, reorder, security, vhost, 00:17:55.885 00:17:55.885 Message: 00:17:55.885 =============== 00:17:55.885 Drivers Enabled 00:17:55.885 =============== 00:17:55.885 00:17:55.885 common: 00:17:55.885 00:17:55.885 bus: 00:17:55.885 pci, vdev, 00:17:55.885 mempool: 00:17:55.885 ring, 00:17:55.885 dma: 00:17:55.885 00:17:55.885 net: 00:17:55.885 00:17:55.885 crypto: 00:17:55.885 00:17:55.885 compress: 00:17:55.885 00:17:55.885 vdpa: 00:17:55.885 00:17:55.885 00:17:55.885 Message: 00:17:55.885 ================= 00:17:55.885 Content Skipped 00:17:55.885 ================= 00:17:55.885 00:17:55.885 apps: 00:17:55.885 dumpcap: explicitly disabled via build config 00:17:55.885 graph: explicitly disabled via build config 00:17:55.885 pdump: explicitly disabled via build config 00:17:55.885 proc-info: explicitly disabled via build config 00:17:55.885 test-acl: explicitly disabled via build config 00:17:55.885 test-bbdev: explicitly disabled via build config 00:17:55.885 test-cmdline: explicitly disabled via build config 00:17:55.885 test-compress-perf: explicitly disabled via build config 00:17:55.885 test-crypto-perf: explicitly disabled via build config 00:17:55.885 test-dma-perf: explicitly disabled via build config 00:17:55.885 test-eventdev: explicitly disabled via build config 00:17:55.885 test-fib: explicitly disabled via build config 00:17:55.885 test-flow-perf: explicitly disabled via build config 00:17:55.885 test-gpudev: explicitly disabled via build config 00:17:55.885 test-mldev: explicitly disabled via build config 00:17:55.885 test-pipeline: explicitly disabled via build config 00:17:55.885 test-pmd: explicitly disabled via build config 00:17:55.885 test-regex: explicitly disabled via build config 00:17:55.885 test-sad: explicitly disabled via build config 00:17:55.885 test-security-perf: explicitly disabled via build config 00:17:55.885 00:17:55.885 libs: 00:17:55.885 metrics: explicitly disabled 
via build config 00:17:55.885 acl: explicitly disabled via build config 00:17:55.885 bbdev: explicitly disabled via build config 00:17:55.885 bitratestats: explicitly disabled via build config 00:17:55.885 bpf: explicitly disabled via build config 00:17:55.885 cfgfile: explicitly disabled via build config 00:17:55.885 distributor: explicitly disabled via build config 00:17:55.885 efd: explicitly disabled via build config 00:17:55.885 eventdev: explicitly disabled via build config 00:17:55.885 dispatcher: explicitly disabled via build config 00:17:55.885 gpudev: explicitly disabled via build config 00:17:55.885 gro: explicitly disabled via build config 00:17:55.885 gso: explicitly disabled via build config 00:17:55.885 ip_frag: explicitly disabled via build config 00:17:55.885 jobstats: explicitly disabled via build config 00:17:55.885 latencystats: explicitly disabled via build config 00:17:55.886 lpm: explicitly disabled via build config 00:17:55.886 member: explicitly disabled via build config 00:17:55.886 pcapng: explicitly disabled via build config 00:17:55.886 rawdev: explicitly disabled via build config 00:17:55.886 regexdev: explicitly disabled via build config 00:17:55.886 mldev: explicitly disabled via build config 00:17:55.886 rib: explicitly disabled via build config 00:17:55.886 sched: explicitly disabled via build config 00:17:55.886 stack: explicitly disabled via build config 00:17:55.886 ipsec: explicitly disabled via build config 00:17:55.886 pdcp: explicitly disabled via build config 00:17:55.886 fib: explicitly disabled via build config 00:17:55.886 port: explicitly disabled via build config 00:17:55.886 pdump: explicitly disabled via build config 00:17:55.886 table: explicitly disabled via build config 00:17:55.886 pipeline: explicitly disabled via build config 00:17:55.886 graph: explicitly disabled via build config 00:17:55.886 node: explicitly disabled via build config 00:17:55.886 00:17:55.886 drivers: 00:17:55.886 common/cpt: not in enabled drivers build config 00:17:55.886 common/dpaax: not in enabled drivers build config 00:17:55.886 common/iavf: not in enabled drivers build config 00:17:55.886 common/idpf: not in enabled drivers build config 00:17:55.886 common/mvep: not in enabled drivers build config 00:17:55.886 common/octeontx: not in enabled drivers build config 00:17:55.886 bus/auxiliary: not in enabled drivers build config 00:17:55.886 bus/cdx: not in enabled drivers build config 00:17:55.886 bus/dpaa: not in enabled drivers build config 00:17:55.886 bus/fslmc: not in enabled drivers build config 00:17:55.886 bus/ifpga: not in enabled drivers build config 00:17:55.886 bus/platform: not in enabled drivers build config 00:17:55.886 bus/vmbus: not in enabled drivers build config 00:17:55.886 common/cnxk: not in enabled drivers build config 00:17:55.886 common/mlx5: not in enabled drivers build config 00:17:55.886 common/nfp: not in enabled drivers build config 00:17:55.886 common/qat: not in enabled drivers build config 00:17:55.886 common/sfc_efx: not in enabled drivers build config 00:17:55.886 mempool/bucket: not in enabled drivers build config 00:17:55.886 mempool/cnxk: not in enabled drivers build config 00:17:55.886 mempool/dpaa: not in enabled drivers build config 00:17:55.886 mempool/dpaa2: not in enabled drivers build config 00:17:55.886 mempool/octeontx: not in enabled drivers build config 00:17:55.886 mempool/stack: not in enabled drivers build config 00:17:55.886 dma/cnxk: not in enabled drivers build config 00:17:55.886 dma/dpaa: not in enabled 
drivers build config 00:17:55.886 dma/dpaa2: not in enabled drivers build config 00:17:55.886 dma/hisilicon: not in enabled drivers build config 00:17:55.886 dma/idxd: not in enabled drivers build config 00:17:55.886 dma/ioat: not in enabled drivers build config 00:17:55.886 dma/skeleton: not in enabled drivers build config 00:17:55.886 net/af_packet: not in enabled drivers build config 00:17:55.886 net/af_xdp: not in enabled drivers build config 00:17:55.886 net/ark: not in enabled drivers build config 00:17:55.886 net/atlantic: not in enabled drivers build config 00:17:55.886 net/avp: not in enabled drivers build config 00:17:55.886 net/axgbe: not in enabled drivers build config 00:17:55.886 net/bnx2x: not in enabled drivers build config 00:17:55.886 net/bnxt: not in enabled drivers build config 00:17:55.886 net/bonding: not in enabled drivers build config 00:17:55.886 net/cnxk: not in enabled drivers build config 00:17:55.886 net/cpfl: not in enabled drivers build config 00:17:55.886 net/cxgbe: not in enabled drivers build config 00:17:55.886 net/dpaa: not in enabled drivers build config 00:17:55.886 net/dpaa2: not in enabled drivers build config 00:17:55.886 net/e1000: not in enabled drivers build config 00:17:55.886 net/ena: not in enabled drivers build config 00:17:55.886 net/enetc: not in enabled drivers build config 00:17:55.886 net/enetfec: not in enabled drivers build config 00:17:55.886 net/enic: not in enabled drivers build config 00:17:55.886 net/failsafe: not in enabled drivers build config 00:17:55.886 net/fm10k: not in enabled drivers build config 00:17:55.886 net/gve: not in enabled drivers build config 00:17:55.886 net/hinic: not in enabled drivers build config 00:17:55.886 net/hns3: not in enabled drivers build config 00:17:55.886 net/i40e: not in enabled drivers build config 00:17:55.886 net/iavf: not in enabled drivers build config 00:17:55.886 net/ice: not in enabled drivers build config 00:17:55.886 net/idpf: not in enabled drivers build config 00:17:55.886 net/igc: not in enabled drivers build config 00:17:55.886 net/ionic: not in enabled drivers build config 00:17:55.886 net/ipn3ke: not in enabled drivers build config 00:17:55.886 net/ixgbe: not in enabled drivers build config 00:17:55.886 net/mana: not in enabled drivers build config 00:17:55.886 net/memif: not in enabled drivers build config 00:17:55.886 net/mlx4: not in enabled drivers build config 00:17:55.886 net/mlx5: not in enabled drivers build config 00:17:55.886 net/mvneta: not in enabled drivers build config 00:17:55.886 net/mvpp2: not in enabled drivers build config 00:17:55.886 net/netvsc: not in enabled drivers build config 00:17:55.886 net/nfb: not in enabled drivers build config 00:17:55.886 net/nfp: not in enabled drivers build config 00:17:55.886 net/ngbe: not in enabled drivers build config 00:17:55.886 net/null: not in enabled drivers build config 00:17:55.886 net/octeontx: not in enabled drivers build config 00:17:55.886 net/octeon_ep: not in enabled drivers build config 00:17:55.886 net/pcap: not in enabled drivers build config 00:17:55.886 net/pfe: not in enabled drivers build config 00:17:55.886 net/qede: not in enabled drivers build config 00:17:55.886 net/ring: not in enabled drivers build config 00:17:55.886 net/sfc: not in enabled drivers build config 00:17:55.886 net/softnic: not in enabled drivers build config 00:17:55.886 net/tap: not in enabled drivers build config 00:17:55.886 net/thunderx: not in enabled drivers build config 00:17:55.886 net/txgbe: not in enabled drivers build 
config 00:17:55.886 net/vdev_netvsc: not in enabled drivers build config 00:17:55.886 net/vhost: not in enabled drivers build config 00:17:55.886 net/virtio: not in enabled drivers build config 00:17:55.886 net/vmxnet3: not in enabled drivers build config 00:17:55.886 raw/*: missing internal dependency, "rawdev" 00:17:55.886 crypto/armv8: not in enabled drivers build config 00:17:55.886 crypto/bcmfs: not in enabled drivers build config 00:17:55.886 crypto/caam_jr: not in enabled drivers build config 00:17:55.886 crypto/ccp: not in enabled drivers build config 00:17:55.886 crypto/cnxk: not in enabled drivers build config 00:17:55.886 crypto/dpaa_sec: not in enabled drivers build config 00:17:55.886 crypto/dpaa2_sec: not in enabled drivers build config 00:17:55.886 crypto/ipsec_mb: not in enabled drivers build config 00:17:55.886 crypto/mlx5: not in enabled drivers build config 00:17:55.886 crypto/mvsam: not in enabled drivers build config 00:17:55.886 crypto/nitrox: not in enabled drivers build config 00:17:55.886 crypto/null: not in enabled drivers build config 00:17:55.886 crypto/octeontx: not in enabled drivers build config 00:17:55.886 crypto/openssl: not in enabled drivers build config 00:17:55.886 crypto/scheduler: not in enabled drivers build config 00:17:55.886 crypto/uadk: not in enabled drivers build config 00:17:55.886 crypto/virtio: not in enabled drivers build config 00:17:55.886 compress/isal: not in enabled drivers build config 00:17:55.886 compress/mlx5: not in enabled drivers build config 00:17:55.886 compress/octeontx: not in enabled drivers build config 00:17:55.886 compress/zlib: not in enabled drivers build config 00:17:55.886 regex/*: missing internal dependency, "regexdev" 00:17:55.886 ml/*: missing internal dependency, "mldev" 00:17:55.886 vdpa/ifc: not in enabled drivers build config 00:17:55.886 vdpa/mlx5: not in enabled drivers build config 00:17:55.886 vdpa/nfp: not in enabled drivers build config 00:17:55.886 vdpa/sfc: not in enabled drivers build config 00:17:55.886 event/*: missing internal dependency, "eventdev" 00:17:55.886 baseband/*: missing internal dependency, "bbdev" 00:17:55.886 gpu/*: missing internal dependency, "gpudev" 00:17:55.886 00:17:55.886 00:17:55.886 Build targets in project: 85 00:17:55.886 00:17:55.886 DPDK 23.11.0 00:17:55.886 00:17:55.886 User defined options 00:17:55.886 buildtype : debug 00:17:55.886 default_library : shared 00:17:55.886 libdir : lib 00:17:55.886 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:17:55.886 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:17:55.886 c_link_args : 00:17:55.886 cpu_instruction_set: native 00:17:55.886 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:17:55.886 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:17:55.886 enable_docs : false 00:17:55.886 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:17:55.886 enable_kmods : false 00:17:55.886 tests : false 00:17:55.886 00:17:55.886 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:17:55.886 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 
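The "User defined options" summary above maps, roughly, to the meson setup invocation sketched below. This is a hand-reconstructed approximation (SPDK's DPDK build scripts issue the real command); the long disable_apps/disable_libs lists are abbreviated with "..." here, since the full lists are already printed in the summary.

# Approximate equivalent, run from the DPDK source tree at spdk/dpdk:
meson setup build-tmp \
  --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
  --buildtype=debug --default-library=shared --libdir=lib \
  -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds' \
  -Dcpu_instruction_set=native \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
  -Denable_docs=false -Denable_kmods=false -Dtests=false \
  -Ddisable_apps=dumpcap,graph,pdump,... \
  -Ddisable_libs=acl,bbdev,bitratestats,...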
00:17:55.886 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:17:55.886 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:17:55.886 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:17:55.886 [4/265] Linking static target lib/librte_kvargs.a 00:17:55.886 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:17:55.886 [6/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:17:55.886 [7/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:17:55.886 [8/265] Linking static target lib/librte_log.a 00:17:55.886 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:17:55.886 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:17:55.886 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:17:55.886 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:17:56.144 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:17:56.144 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:17:56.144 [15/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:17:56.401 [16/265] Linking target lib/librte_log.so.24.0 00:17:56.658 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:17:56.658 [18/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:17:56.658 [19/265] Linking static target lib/librte_telemetry.a 00:17:56.658 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:17:56.658 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:17:56.658 [22/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:17:56.921 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:17:56.921 [24/265] Linking target lib/librte_kvargs.so.24.0 00:17:56.921 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:17:57.192 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:17:57.192 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:17:57.192 [28/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:17:57.449 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:17:57.449 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:17:57.706 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:17:57.706 [32/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:17:57.706 [33/265] Linking target lib/librte_telemetry.so.24.0 00:17:58.271 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:17:58.271 [35/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:17:58.271 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:17:58.529 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:17:58.529 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:17:58.529 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:17:58.529 [40/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:17:58.529 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:17:58.786 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:17:58.786 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:17:58.786 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:17:58.786 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:17:58.786 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:17:59.043 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:17:59.043 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:17:59.978 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:17:59.978 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:17:59.978 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:17:59.978 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:17:59.978 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:18:00.236 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:18:00.236 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:18:00.236 [56/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:18:00.236 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:18:00.236 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:18:00.236 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:18:00.494 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:18:00.494 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:18:00.751 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:18:01.068 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:18:01.068 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:18:01.328 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:18:01.586 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:18:01.586 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:18:01.586 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:18:01.586 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:18:01.586 [70/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:18:01.843 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:18:01.843 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:18:01.843 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:18:01.843 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:18:02.101 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:18:02.101 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:18:02.101 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:18:02.101 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:18:02.667 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:18:02.667 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:18:02.925 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:18:03.184 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:18:03.184 [83/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:18:03.184 [84/265] Linking static target lib/librte_ring.a 00:18:03.184 [85/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:18:03.184 [86/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:18:03.442 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:18:03.442 [88/265] Linking static target lib/librte_eal.a 00:18:03.699 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:18:03.699 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:18:03.958 [91/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:18:04.215 [92/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:18:04.215 [93/265] Linking static target lib/librte_rcu.a 00:18:04.215 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:18:04.215 [95/265] Linking static target lib/librte_mempool.a 00:18:04.478 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:18:04.736 [97/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:18:04.736 [98/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:18:04.736 [99/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:18:04.994 [100/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:18:04.994 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:18:04.994 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:18:05.251 [103/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:18:05.251 [104/265] Linking static target lib/librte_mbuf.a 00:18:05.509 [105/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:18:05.767 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:18:06.024 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:18:06.024 [108/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:18:06.024 [109/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:18:06.024 [110/265] Linking static target lib/librte_net.a 00:18:06.024 [111/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:18:06.590 [112/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:18:06.590 [113/265] Linking static target lib/librte_meter.a 00:18:06.590 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:18:06.590 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:18:06.848 [116/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:18:06.848 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:18:07.106 [118/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:18:07.106 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:18:07.671 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:18:07.929 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 
00:18:08.188 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:18:08.188 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:18:08.188 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:18:08.449 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:18:08.449 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:18:08.449 [127/265] Linking static target lib/librte_pci.a 00:18:08.449 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:18:08.449 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:18:08.449 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:18:08.707 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:18:08.707 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:18:08.707 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:18:08.707 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:18:08.707 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:18:08.966 [136/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:18:08.966 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:18:08.966 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:18:08.966 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:18:08.966 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:18:08.966 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:18:09.224 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:18:09.224 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:18:09.482 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:18:09.482 [145/265] Linking static target lib/librte_cmdline.a 00:18:09.740 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:18:09.998 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:18:10.261 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:18:10.261 [149/265] Linking static target lib/librte_timer.a 00:18:10.261 [150/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:18:10.261 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:18:10.519 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:18:10.519 [153/265] Linking static target lib/librte_ethdev.a 00:18:10.777 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:18:11.035 [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:18:11.035 [156/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:18:11.292 [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:18:11.292 [158/265] Linking static target lib/librte_compressdev.a 00:18:11.292 [159/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:18:11.292 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:18:11.292 [161/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:18:11.292 [162/265] 
Linking static target lib/librte_hash.a 00:18:11.858 [163/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:18:11.858 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:18:11.858 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:18:12.138 [166/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:18:12.138 [167/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:18:12.138 [168/265] Linking static target lib/librte_dmadev.a 00:18:12.396 [169/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:12.396 [170/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:18:12.654 [171/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:18:12.654 [172/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:18:12.654 [173/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:18:12.654 [174/265] Linking static target lib/librte_cryptodev.a 00:18:12.654 [175/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:18:12.911 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:13.169 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:18:13.427 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:18:13.427 [179/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:18:13.427 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:18:13.685 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:18:13.943 [182/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:18:13.943 [183/265] Linking static target lib/librte_security.a 00:18:13.943 [184/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:18:13.943 [185/265] Linking static target lib/librte_power.a 00:18:14.199 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:18:14.199 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:18:14.457 [188/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:18:14.457 [189/265] Linking static target lib/librte_reorder.a 00:18:14.715 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:18:14.973 [191/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:18:15.230 [192/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:18:15.230 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:18:15.488 [194/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:18:15.747 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:18:15.747 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:18:15.747 [197/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:16.005 [198/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:18:16.263 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:18:16.263 [200/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:18:16.263 [201/265] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:18:16.520 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:18:16.520 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:18:16.520 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:18:16.520 [205/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:18:16.778 [206/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:18:16.778 [207/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:18:16.778 [208/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:18:16.778 [209/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:18:17.036 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:18:17.036 [211/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:18:17.036 [212/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:18:17.036 [213/265] Linking static target drivers/librte_bus_vdev.a 00:18:17.036 [214/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:18:17.036 [215/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:18:17.036 [216/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:18:17.036 [217/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:18:17.036 [218/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:18:17.036 [219/265] Linking static target drivers/librte_bus_pci.a 00:18:17.294 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:18:17.294 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:18:17.294 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:18:17.294 [223/265] Linking static target drivers/librte_mempool_ring.a 00:18:17.294 [224/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:17.860 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:18:17.860 [226/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:18:17.860 [227/265] Linking target lib/librte_eal.so.24.0 00:18:18.118 [228/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:18:18.118 [229/265] Linking target drivers/librte_bus_vdev.so.24.0 00:18:18.118 [230/265] Linking target lib/librte_ring.so.24.0 00:18:18.118 [231/265] Linking target lib/librte_meter.so.24.0 00:18:18.118 [232/265] Linking target lib/librte_pci.so.24.0 00:18:18.118 [233/265] Linking target lib/librte_timer.so.24.0 00:18:18.118 [234/265] Linking target lib/librte_dmadev.so.24.0 00:18:18.118 [235/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:18:18.118 [236/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:18:18.376 [237/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:18:18.376 [238/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:18:18.376 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:18:18.376 [240/265] 
Linking target drivers/librte_bus_pci.so.24.0 00:18:18.376 [241/265] Linking target lib/librte_mempool.so.24.0 00:18:18.376 [242/265] Linking target lib/librte_rcu.so.24.0 00:18:18.376 [243/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:18:18.376 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:18:18.376 [245/265] Linking target drivers/librte_mempool_ring.so.24.0 00:18:18.376 [246/265] Linking target lib/librte_mbuf.so.24.0 00:18:18.634 [247/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:18:18.634 [248/265] Linking target lib/librte_compressdev.so.24.0 00:18:18.634 [249/265] Linking target lib/librte_net.so.24.0 00:18:18.634 [250/265] Linking target lib/librte_reorder.so.24.0 00:18:18.634 [251/265] Linking target lib/librte_cryptodev.so.24.0 00:18:18.892 [252/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:18:18.892 [253/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:18:18.892 [254/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:18:18.892 [255/265] Linking static target lib/librte_vhost.a 00:18:18.892 [256/265] Linking target lib/librte_hash.so.24.0 00:18:18.892 [257/265] Linking target lib/librte_security.so.24.0 00:18:18.892 [258/265] Linking target lib/librte_cmdline.so.24.0 00:18:19.149 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:18:19.406 [260/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:19.664 [261/265] Linking target lib/librte_ethdev.so.24.0 00:18:19.921 [262/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:18:19.921 [263/265] Linking target lib/librte_power.so.24.0 00:18:20.522 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:18:20.522 [265/265] Linking target lib/librte_vhost.so.24.0 00:18:20.522 INFO: autodetecting backend as ninja 00:18:20.522 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:18:21.456 CC lib/ut_mock/mock.o 00:18:21.456 CC lib/log/log.o 00:18:21.456 CC lib/ut/ut.o 00:18:21.456 CC lib/log/log_deprecated.o 00:18:21.456 CC lib/log/log_flags.o 00:18:21.713 LIB libspdk_ut_mock.a 00:18:21.713 SO libspdk_ut_mock.so.5.0 00:18:21.713 LIB libspdk_ut.a 00:18:21.713 LIB libspdk_log.a 00:18:21.713 SO libspdk_ut.so.1.0 00:18:21.713 SYMLINK libspdk_ut_mock.so 00:18:21.713 SO libspdk_log.so.6.1 00:18:21.713 SYMLINK libspdk_ut.so 00:18:21.971 SYMLINK libspdk_log.so 00:18:21.971 CXX lib/trace_parser/trace.o 00:18:21.971 CC lib/util/cpuset.o 00:18:21.971 CC lib/util/bit_array.o 00:18:21.971 CC lib/util/base64.o 00:18:21.971 CC lib/dma/dma.o 00:18:21.971 CC lib/util/crc32.o 00:18:21.971 CC lib/util/crc16.o 00:18:21.971 CC lib/util/crc32c.o 00:18:21.971 CC lib/ioat/ioat.o 00:18:21.971 CC lib/vfio_user/host/vfio_user_pci.o 00:18:22.228 CC lib/util/crc32_ieee.o 00:18:22.228 CC lib/util/crc64.o 00:18:22.228 CC lib/util/dif.o 00:18:22.228 LIB libspdk_dma.a 00:18:22.228 CC lib/util/fd.o 00:18:22.228 SO libspdk_dma.so.3.0 00:18:22.228 CC lib/util/file.o 00:18:22.228 CC lib/util/hexlify.o 00:18:22.486 SYMLINK libspdk_dma.so 00:18:22.486 CC lib/util/iov.o 00:18:22.486 CC lib/vfio_user/host/vfio_user.o 00:18:22.486 CC lib/util/math.o 00:18:22.486 CC lib/util/pipe.o 00:18:22.486 LIB libspdk_ioat.a 00:18:22.486 SO 
libspdk_ioat.so.6.0 00:18:22.486 CC lib/util/strerror_tls.o 00:18:22.486 CC lib/util/string.o 00:18:22.486 CC lib/util/uuid.o 00:18:22.486 SYMLINK libspdk_ioat.so 00:18:22.486 CC lib/util/fd_group.o 00:18:22.486 CC lib/util/xor.o 00:18:22.486 CC lib/util/zipf.o 00:18:22.486 LIB libspdk_vfio_user.a 00:18:22.744 SO libspdk_vfio_user.so.4.0 00:18:22.744 SYMLINK libspdk_vfio_user.so 00:18:23.001 LIB libspdk_util.a 00:18:23.259 LIB libspdk_trace_parser.a 00:18:23.259 SO libspdk_util.so.8.0 00:18:23.259 SO libspdk_trace_parser.so.4.0 00:18:23.259 SYMLINK libspdk_trace_parser.so 00:18:23.259 SYMLINK libspdk_util.so 00:18:23.516 CC lib/rdma/common.o 00:18:23.516 CC lib/rdma/rdma_verbs.o 00:18:23.516 CC lib/idxd/idxd_user.o 00:18:23.516 CC lib/idxd/idxd.o 00:18:23.516 CC lib/conf/conf.o 00:18:23.516 CC lib/idxd/idxd_kernel.o 00:18:23.516 CC lib/env_dpdk/env.o 00:18:23.516 CC lib/env_dpdk/memory.o 00:18:23.516 CC lib/json/json_parse.o 00:18:23.516 CC lib/vmd/vmd.o 00:18:23.774 CC lib/vmd/led.o 00:18:23.774 CC lib/json/json_util.o 00:18:23.774 CC lib/json/json_write.o 00:18:23.774 CC lib/env_dpdk/pci.o 00:18:23.774 LIB libspdk_conf.a 00:18:24.031 CC lib/env_dpdk/init.o 00:18:24.031 SO libspdk_conf.so.5.0 00:18:24.031 LIB libspdk_rdma.a 00:18:24.031 SO libspdk_rdma.so.5.0 00:18:24.031 SYMLINK libspdk_conf.so 00:18:24.031 CC lib/env_dpdk/threads.o 00:18:24.031 CC lib/env_dpdk/pci_ioat.o 00:18:24.031 SYMLINK libspdk_rdma.so 00:18:24.031 CC lib/env_dpdk/pci_virtio.o 00:18:24.031 LIB libspdk_idxd.a 00:18:24.031 LIB libspdk_json.a 00:18:24.289 SO libspdk_idxd.so.11.0 00:18:24.289 SO libspdk_json.so.5.1 00:18:24.289 CC lib/env_dpdk/pci_vmd.o 00:18:24.289 CC lib/env_dpdk/pci_idxd.o 00:18:24.289 SYMLINK libspdk_idxd.so 00:18:24.289 CC lib/env_dpdk/pci_event.o 00:18:24.289 CC lib/env_dpdk/sigbus_handler.o 00:18:24.289 SYMLINK libspdk_json.so 00:18:24.289 LIB libspdk_vmd.a 00:18:24.289 CC lib/env_dpdk/pci_dpdk.o 00:18:24.289 CC lib/env_dpdk/pci_dpdk_2207.o 00:18:24.289 SO libspdk_vmd.so.5.0 00:18:24.289 CC lib/env_dpdk/pci_dpdk_2211.o 00:18:24.546 SYMLINK libspdk_vmd.so 00:18:24.546 CC lib/jsonrpc/jsonrpc_server.o 00:18:24.546 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:18:24.546 CC lib/jsonrpc/jsonrpc_client.o 00:18:24.546 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:18:24.805 LIB libspdk_jsonrpc.a 00:18:24.805 SO libspdk_jsonrpc.so.5.1 00:18:24.805 SYMLINK libspdk_jsonrpc.so 00:18:25.063 LIB libspdk_env_dpdk.a 00:18:25.063 CC lib/rpc/rpc.o 00:18:25.063 SO libspdk_env_dpdk.so.13.0 00:18:25.321 SYMLINK libspdk_env_dpdk.so 00:18:25.321 LIB libspdk_rpc.a 00:18:25.321 SO libspdk_rpc.so.5.0 00:18:25.321 SYMLINK libspdk_rpc.so 00:18:25.579 CC lib/sock/sock.o 00:18:25.579 CC lib/sock/sock_rpc.o 00:18:25.579 CC lib/trace/trace.o 00:18:25.579 CC lib/trace/trace_flags.o 00:18:25.579 CC lib/trace/trace_rpc.o 00:18:25.579 CC lib/notify/notify.o 00:18:25.579 CC lib/notify/notify_rpc.o 00:18:25.837 LIB libspdk_notify.a 00:18:25.837 LIB libspdk_trace.a 00:18:25.837 SO libspdk_notify.so.5.0 00:18:25.837 SO libspdk_trace.so.9.0 00:18:26.094 SYMLINK libspdk_notify.so 00:18:26.094 SYMLINK libspdk_trace.so 00:18:26.094 CC lib/thread/thread.o 00:18:26.094 CC lib/thread/iobuf.o 00:18:26.094 LIB libspdk_sock.a 00:18:26.351 SO libspdk_sock.so.8.0 00:18:26.352 SYMLINK libspdk_sock.so 00:18:26.609 CC lib/nvme/nvme_ctrlr_cmd.o 00:18:26.609 CC lib/nvme/nvme_ctrlr.o 00:18:26.609 CC lib/nvme/nvme_fabric.o 00:18:26.609 CC lib/nvme/nvme_ns_cmd.o 00:18:26.609 CC lib/nvme/nvme_ns.o 00:18:26.609 CC lib/nvme/nvme_pcie_common.o 00:18:26.609 CC 
lib/nvme/nvme_pcie.o 00:18:26.609 CC lib/nvme/nvme_qpair.o 00:18:26.609 CC lib/nvme/nvme.o 00:18:27.542 CC lib/nvme/nvme_quirks.o 00:18:27.800 CC lib/nvme/nvme_transport.o 00:18:27.800 CC lib/nvme/nvme_discovery.o 00:18:27.800 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:18:27.800 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:18:27.800 CC lib/nvme/nvme_tcp.o 00:18:28.058 LIB libspdk_thread.a 00:18:28.058 CC lib/nvme/nvme_opal.o 00:18:28.058 SO libspdk_thread.so.9.0 00:18:28.316 SYMLINK libspdk_thread.so 00:18:28.316 CC lib/nvme/nvme_io_msg.o 00:18:28.316 CC lib/nvme/nvme_poll_group.o 00:18:28.574 CC lib/nvme/nvme_zns.o 00:18:28.574 CC lib/nvme/nvme_cuse.o 00:18:28.833 CC lib/nvme/nvme_vfio_user.o 00:18:28.833 CC lib/nvme/nvme_rdma.o 00:18:29.090 CC lib/accel/accel.o 00:18:29.090 CC lib/blob/blobstore.o 00:18:29.090 CC lib/blob/request.o 00:18:29.090 CC lib/blob/zeroes.o 00:18:29.348 CC lib/accel/accel_rpc.o 00:18:29.348 CC lib/accel/accel_sw.o 00:18:29.348 CC lib/blob/blob_bs_dev.o 00:18:29.606 CC lib/init/json_config.o 00:18:29.606 CC lib/init/subsystem.o 00:18:29.606 CC lib/virtio/virtio.o 00:18:29.606 CC lib/vfu_tgt/tgt_endpoint.o 00:18:29.606 CC lib/virtio/virtio_vhost_user.o 00:18:29.863 CC lib/virtio/virtio_vfio_user.o 00:18:29.863 CC lib/virtio/virtio_pci.o 00:18:29.863 CC lib/init/subsystem_rpc.o 00:18:29.863 CC lib/vfu_tgt/tgt_rpc.o 00:18:29.863 CC lib/init/rpc.o 00:18:30.121 LIB libspdk_init.a 00:18:30.121 LIB libspdk_vfu_tgt.a 00:18:30.121 LIB libspdk_virtio.a 00:18:30.121 SO libspdk_init.so.4.0 00:18:30.121 SO libspdk_vfu_tgt.so.2.0 00:18:30.121 SO libspdk_virtio.so.6.0 00:18:30.121 LIB libspdk_nvme.a 00:18:30.378 SYMLINK libspdk_init.so 00:18:30.378 SYMLINK libspdk_vfu_tgt.so 00:18:30.378 SYMLINK libspdk_virtio.so 00:18:30.378 SO libspdk_nvme.so.12.0 00:18:30.378 CC lib/event/app.o 00:18:30.378 CC lib/event/reactor.o 00:18:30.378 CC lib/event/log_rpc.o 00:18:30.378 CC lib/event/app_rpc.o 00:18:30.378 CC lib/event/scheduler_static.o 00:18:30.636 SYMLINK libspdk_nvme.so 00:18:30.636 LIB libspdk_accel.a 00:18:30.894 SO libspdk_accel.so.14.0 00:18:30.894 SYMLINK libspdk_accel.so 00:18:30.894 LIB libspdk_event.a 00:18:30.894 SO libspdk_event.so.12.0 00:18:31.153 CC lib/bdev/bdev_rpc.o 00:18:31.153 CC lib/bdev/bdev.o 00:18:31.153 CC lib/bdev/bdev_zone.o 00:18:31.153 CC lib/bdev/scsi_nvme.o 00:18:31.153 CC lib/bdev/part.o 00:18:31.153 SYMLINK libspdk_event.so 00:18:32.088 LIB libspdk_blob.a 00:18:32.088 SO libspdk_blob.so.10.1 00:18:32.346 SYMLINK libspdk_blob.so 00:18:32.346 CC lib/blobfs/tree.o 00:18:32.346 CC lib/blobfs/blobfs.o 00:18:32.604 CC lib/lvol/lvol.o 00:18:33.537 LIB libspdk_blobfs.a 00:18:33.538 SO libspdk_blobfs.so.9.0 00:18:33.538 LIB libspdk_lvol.a 00:18:33.538 SYMLINK libspdk_blobfs.so 00:18:33.538 SO libspdk_lvol.so.9.1 00:18:33.538 SYMLINK libspdk_lvol.so 00:18:33.795 LIB libspdk_bdev.a 00:18:33.796 SO libspdk_bdev.so.14.0 00:18:34.054 SYMLINK libspdk_bdev.so 00:18:34.054 CC lib/nvmf/ctrlr.o 00:18:34.054 CC lib/nvmf/ctrlr_bdev.o 00:18:34.054 CC lib/nvmf/ctrlr_discovery.o 00:18:34.054 CC lib/nvmf/subsystem.o 00:18:34.054 CC lib/nvmf/nvmf.o 00:18:34.054 CC lib/nvmf/nvmf_rpc.o 00:18:34.054 CC lib/nbd/nbd.o 00:18:34.054 CC lib/ftl/ftl_core.o 00:18:34.054 CC lib/ublk/ublk.o 00:18:34.054 CC lib/scsi/dev.o 00:18:34.621 CC lib/scsi/lun.o 00:18:34.621 CC lib/ftl/ftl_init.o 00:18:34.621 CC lib/nbd/nbd_rpc.o 00:18:34.621 CC lib/ftl/ftl_layout.o 00:18:34.879 CC lib/ublk/ublk_rpc.o 00:18:34.879 LIB libspdk_nbd.a 00:18:34.879 SO libspdk_nbd.so.6.0 00:18:34.879 CC lib/scsi/port.o 00:18:34.879 
SYMLINK libspdk_nbd.so 00:18:34.879 CC lib/scsi/scsi.o 00:18:34.879 CC lib/nvmf/transport.o 00:18:35.138 CC lib/nvmf/tcp.o 00:18:35.138 CC lib/ftl/ftl_debug.o 00:18:35.138 CC lib/nvmf/vfio_user.o 00:18:35.138 LIB libspdk_ublk.a 00:18:35.138 CC lib/scsi/scsi_bdev.o 00:18:35.138 CC lib/scsi/scsi_pr.o 00:18:35.138 SO libspdk_ublk.so.2.0 00:18:35.138 SYMLINK libspdk_ublk.so 00:18:35.138 CC lib/scsi/scsi_rpc.o 00:18:35.138 CC lib/scsi/task.o 00:18:35.395 CC lib/ftl/ftl_io.o 00:18:35.395 CC lib/ftl/ftl_sb.o 00:18:35.395 CC lib/nvmf/rdma.o 00:18:35.395 CC lib/ftl/ftl_l2p.o 00:18:35.654 CC lib/ftl/ftl_l2p_flat.o 00:18:35.654 LIB libspdk_scsi.a 00:18:35.654 CC lib/ftl/ftl_nv_cache.o 00:18:35.654 CC lib/ftl/ftl_band.o 00:18:35.654 CC lib/ftl/ftl_band_ops.o 00:18:35.654 SO libspdk_scsi.so.8.0 00:18:35.654 CC lib/ftl/ftl_writer.o 00:18:35.654 CC lib/ftl/ftl_rq.o 00:18:35.654 SYMLINK libspdk_scsi.so 00:18:35.913 CC lib/iscsi/conn.o 00:18:35.913 CC lib/iscsi/init_grp.o 00:18:35.913 CC lib/ftl/ftl_reloc.o 00:18:35.913 CC lib/ftl/ftl_l2p_cache.o 00:18:35.913 CC lib/ftl/ftl_p2l.o 00:18:36.171 CC lib/ftl/mngt/ftl_mngt.o 00:18:36.429 CC lib/vhost/vhost.o 00:18:36.429 CC lib/vhost/vhost_rpc.o 00:18:36.429 CC lib/vhost/vhost_scsi.o 00:18:36.429 CC lib/iscsi/iscsi.o 00:18:36.429 CC lib/iscsi/md5.o 00:18:36.429 CC lib/vhost/vhost_blk.o 00:18:36.429 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:18:36.688 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:18:36.688 CC lib/ftl/mngt/ftl_mngt_startup.o 00:18:36.688 CC lib/vhost/rte_vhost_user.o 00:18:36.688 CC lib/iscsi/param.o 00:18:36.688 CC lib/iscsi/portal_grp.o 00:18:36.953 CC lib/ftl/mngt/ftl_mngt_md.o 00:18:36.953 CC lib/iscsi/tgt_node.o 00:18:37.212 CC lib/iscsi/iscsi_subsystem.o 00:18:37.212 CC lib/iscsi/iscsi_rpc.o 00:18:37.212 CC lib/iscsi/task.o 00:18:37.212 CC lib/ftl/mngt/ftl_mngt_misc.o 00:18:37.212 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:18:37.470 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:18:37.470 CC lib/ftl/mngt/ftl_mngt_band.o 00:18:37.470 LIB libspdk_nvmf.a 00:18:37.470 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:18:37.470 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:18:37.728 SO libspdk_nvmf.so.17.0 00:18:37.728 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:18:37.728 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:18:37.728 CC lib/ftl/utils/ftl_conf.o 00:18:37.728 CC lib/ftl/utils/ftl_md.o 00:18:37.728 CC lib/ftl/utils/ftl_mempool.o 00:18:37.728 CC lib/ftl/utils/ftl_bitmap.o 00:18:37.728 CC lib/ftl/utils/ftl_property.o 00:18:37.986 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:18:37.986 LIB libspdk_vhost.a 00:18:37.986 LIB libspdk_iscsi.a 00:18:37.986 SYMLINK libspdk_nvmf.so 00:18:37.986 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:18:37.986 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:18:37.986 SO libspdk_vhost.so.7.1 00:18:37.986 SO libspdk_iscsi.so.7.0 00:18:37.986 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:18:37.986 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:18:37.986 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:18:37.986 SYMLINK libspdk_vhost.so 00:18:37.986 CC lib/ftl/upgrade/ftl_sb_v3.o 00:18:38.244 CC lib/ftl/upgrade/ftl_sb_v5.o 00:18:38.244 CC lib/ftl/nvc/ftl_nvc_dev.o 00:18:38.244 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:18:38.244 SYMLINK libspdk_iscsi.so 00:18:38.244 CC lib/ftl/base/ftl_base_dev.o 00:18:38.244 CC lib/ftl/base/ftl_base_bdev.o 00:18:38.244 CC lib/ftl/ftl_trace.o 00:18:38.501 LIB libspdk_ftl.a 00:18:38.759 SO libspdk_ftl.so.8.0 00:18:39.323 SYMLINK libspdk_ftl.so 00:18:39.582 CC module/env_dpdk/env_dpdk_rpc.o 00:18:39.582 CC module/vfu_device/vfu_virtio.o 00:18:39.582 CC module/sock/posix/posix.o 
00:18:39.582 CC module/scheduler/dynamic/scheduler_dynamic.o 00:18:39.582 CC module/accel/ioat/accel_ioat.o 00:18:39.582 CC module/sock/uring/uring.o 00:18:39.582 CC module/accel/error/accel_error.o 00:18:39.582 CC module/blob/bdev/blob_bdev.o 00:18:39.582 CC module/accel/iaa/accel_iaa.o 00:18:39.582 CC module/accel/dsa/accel_dsa.o 00:18:39.582 LIB libspdk_env_dpdk_rpc.a 00:18:39.582 SO libspdk_env_dpdk_rpc.so.5.0 00:18:39.840 SYMLINK libspdk_env_dpdk_rpc.so 00:18:39.840 CC module/accel/ioat/accel_ioat_rpc.o 00:18:39.840 CC module/accel/dsa/accel_dsa_rpc.o 00:18:39.840 CC module/accel/error/accel_error_rpc.o 00:18:39.840 LIB libspdk_scheduler_dynamic.a 00:18:39.840 SO libspdk_scheduler_dynamic.so.3.0 00:18:39.840 LIB libspdk_blob_bdev.a 00:18:39.840 LIB libspdk_accel_ioat.a 00:18:39.840 SO libspdk_blob_bdev.so.10.1 00:18:39.840 SO libspdk_accel_ioat.so.5.0 00:18:39.840 CC module/accel/iaa/accel_iaa_rpc.o 00:18:39.840 LIB libspdk_accel_dsa.a 00:18:39.840 SYMLINK libspdk_scheduler_dynamic.so 00:18:39.840 LIB libspdk_accel_error.a 00:18:40.098 SYMLINK libspdk_blob_bdev.so 00:18:40.098 CC module/vfu_device/vfu_virtio_blk.o 00:18:40.098 SO libspdk_accel_error.so.1.0 00:18:40.099 SO libspdk_accel_dsa.so.4.0 00:18:40.099 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:18:40.099 SYMLINK libspdk_accel_ioat.so 00:18:40.099 SYMLINK libspdk_accel_dsa.so 00:18:40.099 CC module/vfu_device/vfu_virtio_scsi.o 00:18:40.099 SYMLINK libspdk_accel_error.so 00:18:40.099 CC module/vfu_device/vfu_virtio_rpc.o 00:18:40.099 CC module/scheduler/gscheduler/gscheduler.o 00:18:40.099 LIB libspdk_accel_iaa.a 00:18:40.099 LIB libspdk_scheduler_dpdk_governor.a 00:18:40.356 SO libspdk_accel_iaa.so.2.0 00:18:40.356 LIB libspdk_sock_posix.a 00:18:40.356 SO libspdk_scheduler_dpdk_governor.so.3.0 00:18:40.356 CC module/bdev/delay/vbdev_delay.o 00:18:40.356 SO libspdk_sock_posix.so.5.0 00:18:40.356 LIB libspdk_scheduler_gscheduler.a 00:18:40.356 SYMLINK libspdk_accel_iaa.so 00:18:40.356 SO libspdk_scheduler_gscheduler.so.3.0 00:18:40.356 SYMLINK libspdk_scheduler_dpdk_governor.so 00:18:40.356 CC module/bdev/delay/vbdev_delay_rpc.o 00:18:40.356 SYMLINK libspdk_sock_posix.so 00:18:40.356 SYMLINK libspdk_scheduler_gscheduler.so 00:18:40.356 LIB libspdk_sock_uring.a 00:18:40.356 SO libspdk_sock_uring.so.4.0 00:18:40.356 LIB libspdk_vfu_device.a 00:18:40.613 CC module/blobfs/bdev/blobfs_bdev.o 00:18:40.613 CC module/bdev/error/vbdev_error.o 00:18:40.613 SO libspdk_vfu_device.so.2.0 00:18:40.613 CC module/bdev/gpt/gpt.o 00:18:40.613 SYMLINK libspdk_sock_uring.so 00:18:40.613 CC module/bdev/error/vbdev_error_rpc.o 00:18:40.613 CC module/bdev/lvol/vbdev_lvol.o 00:18:40.613 CC module/bdev/malloc/bdev_malloc.o 00:18:40.613 CC module/bdev/null/bdev_null.o 00:18:40.613 SYMLINK libspdk_vfu_device.so 00:18:40.613 CC module/bdev/malloc/bdev_malloc_rpc.o 00:18:40.613 CC module/bdev/nvme/bdev_nvme.o 00:18:40.613 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:18:40.877 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:18:40.877 CC module/bdev/gpt/vbdev_gpt.o 00:18:40.877 LIB libspdk_bdev_error.a 00:18:40.877 LIB libspdk_bdev_delay.a 00:18:40.877 CC module/bdev/null/bdev_null_rpc.o 00:18:40.877 SO libspdk_bdev_delay.so.5.0 00:18:40.877 SO libspdk_bdev_error.so.5.0 00:18:40.877 LIB libspdk_blobfs_bdev.a 00:18:40.877 SYMLINK libspdk_bdev_delay.so 00:18:40.877 SYMLINK libspdk_bdev_error.so 00:18:40.877 CC module/bdev/nvme/bdev_nvme_rpc.o 00:18:40.877 CC module/bdev/nvme/nvme_rpc.o 00:18:40.877 SO libspdk_blobfs_bdev.so.5.0 00:18:40.877 LIB 
libspdk_bdev_malloc.a 00:18:41.161 LIB libspdk_bdev_null.a 00:18:41.161 SYMLINK libspdk_blobfs_bdev.so 00:18:41.161 SO libspdk_bdev_malloc.so.5.0 00:18:41.161 SO libspdk_bdev_null.so.5.0 00:18:41.161 LIB libspdk_bdev_gpt.a 00:18:41.161 CC module/bdev/passthru/vbdev_passthru.o 00:18:41.161 SO libspdk_bdev_gpt.so.5.0 00:18:41.161 SYMLINK libspdk_bdev_malloc.so 00:18:41.161 SYMLINK libspdk_bdev_null.so 00:18:41.161 CC module/bdev/nvme/bdev_mdns_client.o 00:18:41.161 SYMLINK libspdk_bdev_gpt.so 00:18:41.161 CC module/bdev/raid/bdev_raid.o 00:18:41.161 CC module/bdev/nvme/vbdev_opal.o 00:18:41.161 LIB libspdk_bdev_lvol.a 00:18:41.161 SO libspdk_bdev_lvol.so.5.0 00:18:41.161 CC module/bdev/split/vbdev_split.o 00:18:41.420 CC module/bdev/nvme/vbdev_opal_rpc.o 00:18:41.420 CC module/bdev/zone_block/vbdev_zone_block.o 00:18:41.420 SYMLINK libspdk_bdev_lvol.so 00:18:41.420 CC module/bdev/raid/bdev_raid_rpc.o 00:18:41.420 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:18:41.420 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:18:41.420 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:18:41.678 CC module/bdev/split/vbdev_split_rpc.o 00:18:41.678 CC module/bdev/raid/bdev_raid_sb.o 00:18:41.678 LIB libspdk_bdev_passthru.a 00:18:41.678 SO libspdk_bdev_passthru.so.5.0 00:18:41.678 CC module/bdev/uring/bdev_uring.o 00:18:41.678 LIB libspdk_bdev_zone_block.a 00:18:41.678 CC module/bdev/ftl/bdev_ftl.o 00:18:41.678 CC module/bdev/aio/bdev_aio.o 00:18:41.678 SO libspdk_bdev_zone_block.so.5.0 00:18:41.678 SYMLINK libspdk_bdev_passthru.so 00:18:41.678 LIB libspdk_bdev_split.a 00:18:41.936 SO libspdk_bdev_split.so.5.0 00:18:41.936 CC module/bdev/iscsi/bdev_iscsi.o 00:18:41.936 SYMLINK libspdk_bdev_zone_block.so 00:18:41.936 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:18:41.936 SYMLINK libspdk_bdev_split.so 00:18:41.936 CC module/bdev/ftl/bdev_ftl_rpc.o 00:18:41.936 CC module/bdev/virtio/bdev_virtio_scsi.o 00:18:41.936 CC module/bdev/virtio/bdev_virtio_blk.o 00:18:41.936 CC module/bdev/virtio/bdev_virtio_rpc.o 00:18:41.936 CC module/bdev/uring/bdev_uring_rpc.o 00:18:42.194 CC module/bdev/raid/raid0.o 00:18:42.194 CC module/bdev/raid/raid1.o 00:18:42.194 CC module/bdev/aio/bdev_aio_rpc.o 00:18:42.194 LIB libspdk_bdev_ftl.a 00:18:42.194 SO libspdk_bdev_ftl.so.5.0 00:18:42.194 LIB libspdk_bdev_iscsi.a 00:18:42.194 LIB libspdk_bdev_uring.a 00:18:42.194 SO libspdk_bdev_iscsi.so.5.0 00:18:42.194 SO libspdk_bdev_uring.so.5.0 00:18:42.194 SYMLINK libspdk_bdev_ftl.so 00:18:42.194 CC module/bdev/raid/concat.o 00:18:42.194 LIB libspdk_bdev_aio.a 00:18:42.453 SYMLINK libspdk_bdev_uring.so 00:18:42.453 SO libspdk_bdev_aio.so.5.0 00:18:42.453 SYMLINK libspdk_bdev_iscsi.so 00:18:42.453 SYMLINK libspdk_bdev_aio.so 00:18:42.453 LIB libspdk_bdev_virtio.a 00:18:42.453 LIB libspdk_bdev_raid.a 00:18:42.712 SO libspdk_bdev_virtio.so.5.0 00:18:42.712 SO libspdk_bdev_raid.so.5.0 00:18:42.712 SYMLINK libspdk_bdev_virtio.so 00:18:42.712 SYMLINK libspdk_bdev_raid.so 00:18:43.280 LIB libspdk_bdev_nvme.a 00:18:43.280 SO libspdk_bdev_nvme.so.6.0 00:18:43.280 SYMLINK libspdk_bdev_nvme.so 00:18:43.538 CC module/event/subsystems/scheduler/scheduler.o 00:18:43.538 CC module/event/subsystems/sock/sock.o 00:18:43.538 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:18:43.538 CC module/event/subsystems/iobuf/iobuf.o 00:18:43.538 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:18:43.538 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:18:43.538 CC module/event/subsystems/vmd/vmd.o 00:18:43.538 CC module/event/subsystems/vmd/vmd_rpc.o 00:18:43.796 LIB 
libspdk_event_vfu_tgt.a 00:18:43.796 LIB libspdk_event_sock.a 00:18:43.797 LIB libspdk_event_vhost_blk.a 00:18:43.797 LIB libspdk_event_scheduler.a 00:18:43.797 SO libspdk_event_vfu_tgt.so.2.0 00:18:43.797 LIB libspdk_event_vmd.a 00:18:43.797 LIB libspdk_event_iobuf.a 00:18:43.797 SO libspdk_event_vhost_blk.so.2.0 00:18:43.797 SO libspdk_event_sock.so.4.0 00:18:43.797 SO libspdk_event_scheduler.so.3.0 00:18:43.797 SO libspdk_event_vmd.so.5.0 00:18:43.797 SO libspdk_event_iobuf.so.2.0 00:18:43.797 SYMLINK libspdk_event_vfu_tgt.so 00:18:43.797 SYMLINK libspdk_event_sock.so 00:18:43.797 SYMLINK libspdk_event_vhost_blk.so 00:18:44.055 SYMLINK libspdk_event_iobuf.so 00:18:44.055 SYMLINK libspdk_event_scheduler.so 00:18:44.055 SYMLINK libspdk_event_vmd.so 00:18:44.055 CC module/event/subsystems/accel/accel.o 00:18:44.313 LIB libspdk_event_accel.a 00:18:44.313 SO libspdk_event_accel.so.5.0 00:18:44.313 SYMLINK libspdk_event_accel.so 00:18:44.571 CC module/event/subsystems/bdev/bdev.o 00:18:44.830 LIB libspdk_event_bdev.a 00:18:44.830 SO libspdk_event_bdev.so.5.0 00:18:44.830 SYMLINK libspdk_event_bdev.so 00:18:45.089 CC module/event/subsystems/ublk/ublk.o 00:18:45.089 CC module/event/subsystems/scsi/scsi.o 00:18:45.089 CC module/event/subsystems/nbd/nbd.o 00:18:45.089 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:18:45.089 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:18:45.089 LIB libspdk_event_nbd.a 00:18:45.089 LIB libspdk_event_ublk.a 00:18:45.089 SO libspdk_event_nbd.so.5.0 00:18:45.089 LIB libspdk_event_scsi.a 00:18:45.089 SO libspdk_event_ublk.so.2.0 00:18:45.348 SYMLINK libspdk_event_nbd.so 00:18:45.348 SO libspdk_event_scsi.so.5.0 00:18:45.348 LIB libspdk_event_nvmf.a 00:18:45.348 SYMLINK libspdk_event_ublk.so 00:18:45.348 SO libspdk_event_nvmf.so.5.0 00:18:45.348 SYMLINK libspdk_event_scsi.so 00:18:45.348 SYMLINK libspdk_event_nvmf.so 00:18:45.348 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:18:45.348 CC module/event/subsystems/iscsi/iscsi.o 00:18:45.607 LIB libspdk_event_vhost_scsi.a 00:18:45.607 LIB libspdk_event_iscsi.a 00:18:45.607 SO libspdk_event_vhost_scsi.so.2.0 00:18:45.607 SO libspdk_event_iscsi.so.5.0 00:18:45.607 SYMLINK libspdk_event_vhost_scsi.so 00:18:45.865 SYMLINK libspdk_event_iscsi.so 00:18:45.865 SO libspdk.so.5.0 00:18:45.865 SYMLINK libspdk.so 00:18:46.124 CXX app/trace/trace.o 00:18:46.124 CC app/spdk_lspci/spdk_lspci.o 00:18:46.124 CC app/spdk_nvme_perf/perf.o 00:18:46.124 CC app/trace_record/trace_record.o 00:18:46.124 CC app/spdk_nvme_identify/identify.o 00:18:46.124 CC app/iscsi_tgt/iscsi_tgt.o 00:18:46.124 CC app/nvmf_tgt/nvmf_main.o 00:18:46.124 CC examples/accel/perf/accel_perf.o 00:18:46.124 CC app/spdk_tgt/spdk_tgt.o 00:18:46.124 LINK spdk_lspci 00:18:46.124 CC test/accel/dif/dif.o 00:18:46.382 LINK nvmf_tgt 00:18:46.382 LINK spdk_trace_record 00:18:46.382 LINK iscsi_tgt 00:18:46.382 LINK spdk_tgt 00:18:46.640 LINK spdk_trace 00:18:46.640 CC examples/bdev/hello_world/hello_bdev.o 00:18:46.640 LINK dif 00:18:46.640 CC app/spdk_nvme_discover/discovery_aer.o 00:18:46.640 CC examples/blob/hello_world/hello_blob.o 00:18:46.899 CC test/app/bdev_svc/bdev_svc.o 00:18:46.899 CC test/bdev/bdevio/bdevio.o 00:18:46.899 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:18:46.899 LINK accel_perf 00:18:46.899 LINK hello_bdev 00:18:46.899 LINK spdk_nvme_discover 00:18:47.158 LINK hello_blob 00:18:47.158 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:18:47.158 LINK bdev_svc 00:18:47.158 LINK spdk_nvme_perf 00:18:47.158 CC examples/bdev/bdevperf/bdevperf.o 00:18:47.158 LINK 
spdk_nvme_identify 00:18:47.416 LINK bdevio 00:18:47.416 CC examples/blob/cli/blobcli.o 00:18:47.416 CC examples/ioat/perf/perf.o 00:18:47.416 CC app/spdk_top/spdk_top.o 00:18:47.416 CC examples/nvme/hello_world/hello_world.o 00:18:47.416 LINK nvme_fuzz 00:18:47.416 CC examples/sock/hello_world/hello_sock.o 00:18:47.416 CC examples/ioat/verify/verify.o 00:18:47.674 LINK ioat_perf 00:18:47.674 CC examples/vmd/lsvmd/lsvmd.o 00:18:47.674 LINK hello_world 00:18:47.674 LINK hello_sock 00:18:47.675 LINK verify 00:18:47.675 CC examples/nvmf/nvmf/nvmf.o 00:18:47.933 LINK blobcli 00:18:47.933 LINK lsvmd 00:18:47.933 CC examples/util/zipf/zipf.o 00:18:47.933 CC examples/nvme/reconnect/reconnect.o 00:18:47.933 CC test/app/histogram_perf/histogram_perf.o 00:18:47.933 LINK bdevperf 00:18:48.192 LINK zipf 00:18:48.192 CC examples/thread/thread/thread_ex.o 00:18:48.192 LINK nvmf 00:18:48.192 CC examples/vmd/led/led.o 00:18:48.192 LINK histogram_perf 00:18:48.192 CC app/vhost/vhost.o 00:18:48.192 LINK reconnect 00:18:48.192 LINK led 00:18:48.450 LINK spdk_top 00:18:48.450 CC test/app/jsoncat/jsoncat.o 00:18:48.450 CC app/spdk_dd/spdk_dd.o 00:18:48.450 LINK thread 00:18:48.450 CC app/fio/nvme/fio_plugin.o 00:18:48.450 CC test/app/stub/stub.o 00:18:48.450 LINK vhost 00:18:48.450 CC examples/nvme/nvme_manage/nvme_manage.o 00:18:48.450 TEST_HEADER include/spdk/accel.h 00:18:48.450 TEST_HEADER include/spdk/accel_module.h 00:18:48.450 TEST_HEADER include/spdk/assert.h 00:18:48.450 TEST_HEADER include/spdk/barrier.h 00:18:48.709 LINK jsoncat 00:18:48.709 TEST_HEADER include/spdk/base64.h 00:18:48.709 TEST_HEADER include/spdk/bdev.h 00:18:48.709 TEST_HEADER include/spdk/bdev_module.h 00:18:48.709 TEST_HEADER include/spdk/bdev_zone.h 00:18:48.709 TEST_HEADER include/spdk/bit_array.h 00:18:48.709 TEST_HEADER include/spdk/bit_pool.h 00:18:48.709 TEST_HEADER include/spdk/blob_bdev.h 00:18:48.709 TEST_HEADER include/spdk/blobfs_bdev.h 00:18:48.709 TEST_HEADER include/spdk/blobfs.h 00:18:48.709 TEST_HEADER include/spdk/blob.h 00:18:48.709 TEST_HEADER include/spdk/conf.h 00:18:48.709 TEST_HEADER include/spdk/config.h 00:18:48.709 TEST_HEADER include/spdk/cpuset.h 00:18:48.709 TEST_HEADER include/spdk/crc16.h 00:18:48.709 TEST_HEADER include/spdk/crc32.h 00:18:48.709 TEST_HEADER include/spdk/crc64.h 00:18:48.709 TEST_HEADER include/spdk/dif.h 00:18:48.709 TEST_HEADER include/spdk/dma.h 00:18:48.709 TEST_HEADER include/spdk/endian.h 00:18:48.709 TEST_HEADER include/spdk/env_dpdk.h 00:18:48.709 TEST_HEADER include/spdk/env.h 00:18:48.709 TEST_HEADER include/spdk/event.h 00:18:48.709 TEST_HEADER include/spdk/fd_group.h 00:18:48.709 TEST_HEADER include/spdk/fd.h 00:18:48.709 TEST_HEADER include/spdk/file.h 00:18:48.709 TEST_HEADER include/spdk/ftl.h 00:18:48.709 TEST_HEADER include/spdk/gpt_spec.h 00:18:48.709 TEST_HEADER include/spdk/hexlify.h 00:18:48.709 TEST_HEADER include/spdk/histogram_data.h 00:18:48.709 TEST_HEADER include/spdk/idxd.h 00:18:48.709 TEST_HEADER include/spdk/idxd_spec.h 00:18:48.709 TEST_HEADER include/spdk/init.h 00:18:48.709 TEST_HEADER include/spdk/ioat.h 00:18:48.709 TEST_HEADER include/spdk/ioat_spec.h 00:18:48.709 CC test/blobfs/mkfs/mkfs.o 00:18:48.709 TEST_HEADER include/spdk/iscsi_spec.h 00:18:48.709 TEST_HEADER include/spdk/json.h 00:18:48.709 LINK stub 00:18:48.709 TEST_HEADER include/spdk/jsonrpc.h 00:18:48.709 TEST_HEADER include/spdk/likely.h 00:18:48.709 TEST_HEADER include/spdk/log.h 00:18:48.709 TEST_HEADER include/spdk/lvol.h 00:18:48.709 TEST_HEADER include/spdk/memory.h 00:18:48.709 
TEST_HEADER include/spdk/mmio.h 00:18:48.709 TEST_HEADER include/spdk/nbd.h 00:18:48.709 TEST_HEADER include/spdk/notify.h 00:18:48.709 TEST_HEADER include/spdk/nvme.h 00:18:48.709 TEST_HEADER include/spdk/nvme_intel.h 00:18:48.709 TEST_HEADER include/spdk/nvme_ocssd.h 00:18:48.709 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:18:48.709 TEST_HEADER include/spdk/nvme_spec.h 00:18:48.709 TEST_HEADER include/spdk/nvme_zns.h 00:18:48.709 TEST_HEADER include/spdk/nvmf_cmd.h 00:18:48.709 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:18:48.709 TEST_HEADER include/spdk/nvmf.h 00:18:48.709 TEST_HEADER include/spdk/nvmf_spec.h 00:18:48.710 TEST_HEADER include/spdk/nvmf_transport.h 00:18:48.710 TEST_HEADER include/spdk/opal.h 00:18:48.710 TEST_HEADER include/spdk/opal_spec.h 00:18:48.710 TEST_HEADER include/spdk/pci_ids.h 00:18:48.710 TEST_HEADER include/spdk/pipe.h 00:18:48.710 TEST_HEADER include/spdk/queue.h 00:18:48.710 TEST_HEADER include/spdk/reduce.h 00:18:48.710 TEST_HEADER include/spdk/rpc.h 00:18:48.710 TEST_HEADER include/spdk/scheduler.h 00:18:48.710 CC examples/nvme/arbitration/arbitration.o 00:18:48.710 TEST_HEADER include/spdk/scsi.h 00:18:48.710 TEST_HEADER include/spdk/scsi_spec.h 00:18:48.710 TEST_HEADER include/spdk/sock.h 00:18:48.710 TEST_HEADER include/spdk/stdinc.h 00:18:48.710 TEST_HEADER include/spdk/string.h 00:18:48.710 TEST_HEADER include/spdk/thread.h 00:18:48.710 TEST_HEADER include/spdk/trace.h 00:18:48.710 TEST_HEADER include/spdk/trace_parser.h 00:18:48.710 TEST_HEADER include/spdk/tree.h 00:18:48.710 TEST_HEADER include/spdk/ublk.h 00:18:48.710 TEST_HEADER include/spdk/util.h 00:18:48.710 TEST_HEADER include/spdk/uuid.h 00:18:48.710 TEST_HEADER include/spdk/version.h 00:18:48.710 TEST_HEADER include/spdk/vfio_user_pci.h 00:18:48.710 TEST_HEADER include/spdk/vfio_user_spec.h 00:18:48.710 TEST_HEADER include/spdk/vhost.h 00:18:48.710 TEST_HEADER include/spdk/vmd.h 00:18:48.710 TEST_HEADER include/spdk/xor.h 00:18:48.710 TEST_HEADER include/spdk/zipf.h 00:18:48.710 CXX test/cpp_headers/accel.o 00:18:48.710 CXX test/cpp_headers/accel_module.o 00:18:48.710 CC examples/nvme/hotplug/hotplug.o 00:18:48.968 LINK spdk_dd 00:18:48.968 LINK mkfs 00:18:48.968 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:18:48.968 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:18:48.968 CXX test/cpp_headers/assert.o 00:18:48.968 LINK iscsi_fuzz 00:18:48.968 LINK nvme_manage 00:18:48.968 LINK arbitration 00:18:48.968 LINK hotplug 00:18:48.968 CXX test/cpp_headers/barrier.o 00:18:48.968 LINK spdk_nvme 00:18:49.226 CXX test/cpp_headers/base64.o 00:18:49.226 CC test/dma/test_dma/test_dma.o 00:18:49.226 CXX test/cpp_headers/bdev.o 00:18:49.226 CC app/fio/bdev/fio_plugin.o 00:18:49.226 CC test/env/vtophys/vtophys.o 00:18:49.226 CC examples/nvme/cmb_copy/cmb_copy.o 00:18:49.226 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:18:49.485 CC test/env/mem_callbacks/mem_callbacks.o 00:18:49.485 CC test/env/memory/memory_ut.o 00:18:49.485 LINK vhost_fuzz 00:18:49.485 CC examples/idxd/perf/perf.o 00:18:49.485 LINK vtophys 00:18:49.485 LINK env_dpdk_post_init 00:18:49.485 LINK cmb_copy 00:18:49.743 CXX test/cpp_headers/bdev_module.o 00:18:49.743 CXX test/cpp_headers/bdev_zone.o 00:18:49.743 CXX test/cpp_headers/bit_array.o 00:18:49.743 LINK test_dma 00:18:49.743 CC test/event/event_perf/event_perf.o 00:18:49.743 LINK spdk_bdev 00:18:50.001 CC examples/nvme/abort/abort.o 00:18:50.002 CXX test/cpp_headers/bit_pool.o 00:18:50.002 LINK idxd_perf 00:18:50.002 CC test/rpc_client/rpc_client_test.o 00:18:50.002 CC 
test/nvme/aer/aer.o 00:18:50.002 CC test/env/pci/pci_ut.o 00:18:50.002 CC test/lvol/esnap/esnap.o 00:18:50.002 LINK mem_callbacks 00:18:50.002 LINK event_perf 00:18:50.260 CXX test/cpp_headers/blob_bdev.o 00:18:50.260 CXX test/cpp_headers/blobfs_bdev.o 00:18:50.260 CC examples/interrupt_tgt/interrupt_tgt.o 00:18:50.260 LINK rpc_client_test 00:18:50.260 CC test/event/reactor/reactor.o 00:18:50.519 LINK memory_ut 00:18:50.519 LINK abort 00:18:50.519 LINK aer 00:18:50.519 CXX test/cpp_headers/blobfs.o 00:18:50.519 LINK interrupt_tgt 00:18:50.519 LINK pci_ut 00:18:50.519 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:18:50.519 LINK reactor 00:18:50.519 CC test/thread/poller_perf/poller_perf.o 00:18:50.519 CXX test/cpp_headers/blob.o 00:18:50.519 CXX test/cpp_headers/conf.o 00:18:50.777 CC test/nvme/reset/reset.o 00:18:50.777 CC test/nvme/sgl/sgl.o 00:18:50.777 CC test/event/reactor_perf/reactor_perf.o 00:18:50.777 LINK poller_perf 00:18:50.777 LINK pmr_persistence 00:18:50.777 CXX test/cpp_headers/config.o 00:18:50.777 CXX test/cpp_headers/cpuset.o 00:18:50.777 CC test/event/app_repeat/app_repeat.o 00:18:50.777 CC test/nvme/e2edp/nvme_dp.o 00:18:51.035 LINK reactor_perf 00:18:51.035 CC test/event/scheduler/scheduler.o 00:18:51.035 LINK reset 00:18:51.035 CC test/nvme/err_injection/err_injection.o 00:18:51.035 CC test/nvme/overhead/overhead.o 00:18:51.035 LINK app_repeat 00:18:51.035 LINK sgl 00:18:51.035 CXX test/cpp_headers/crc16.o 00:18:51.035 CC test/nvme/startup/startup.o 00:18:51.293 LINK nvme_dp 00:18:51.293 LINK err_injection 00:18:51.293 LINK scheduler 00:18:51.293 CXX test/cpp_headers/crc32.o 00:18:51.293 CC test/nvme/reserve/reserve.o 00:18:51.293 CC test/nvme/simple_copy/simple_copy.o 00:18:51.293 LINK overhead 00:18:51.293 LINK startup 00:18:51.293 CC test/nvme/connect_stress/connect_stress.o 00:18:51.293 CXX test/cpp_headers/crc64.o 00:18:51.552 CC test/nvme/compliance/nvme_compliance.o 00:18:51.552 CC test/nvme/boot_partition/boot_partition.o 00:18:51.552 LINK reserve 00:18:51.552 LINK simple_copy 00:18:51.552 LINK connect_stress 00:18:51.552 CC test/nvme/fused_ordering/fused_ordering.o 00:18:51.552 CC test/nvme/doorbell_aers/doorbell_aers.o 00:18:51.552 CC test/nvme/fdp/fdp.o 00:18:51.552 LINK boot_partition 00:18:51.552 CXX test/cpp_headers/dif.o 00:18:51.811 CXX test/cpp_headers/dma.o 00:18:51.811 CC test/nvme/cuse/cuse.o 00:18:51.811 CXX test/cpp_headers/endian.o 00:18:51.811 LINK fused_ordering 00:18:51.811 LINK doorbell_aers 00:18:51.811 CXX test/cpp_headers/env_dpdk.o 00:18:51.811 LINK nvme_compliance 00:18:51.811 CXX test/cpp_headers/env.o 00:18:51.811 CXX test/cpp_headers/event.o 00:18:51.811 CXX test/cpp_headers/fd_group.o 00:18:51.811 LINK fdp 00:18:52.069 CXX test/cpp_headers/fd.o 00:18:52.069 CXX test/cpp_headers/file.o 00:18:52.069 CXX test/cpp_headers/ftl.o 00:18:52.069 CXX test/cpp_headers/gpt_spec.o 00:18:52.069 CXX test/cpp_headers/hexlify.o 00:18:52.069 CXX test/cpp_headers/histogram_data.o 00:18:52.069 CXX test/cpp_headers/idxd.o 00:18:52.069 CXX test/cpp_headers/idxd_spec.o 00:18:52.069 CXX test/cpp_headers/init.o 00:18:52.327 CXX test/cpp_headers/ioat.o 00:18:52.328 CXX test/cpp_headers/ioat_spec.o 00:18:52.328 CXX test/cpp_headers/iscsi_spec.o 00:18:52.328 CXX test/cpp_headers/json.o 00:18:52.328 CXX test/cpp_headers/jsonrpc.o 00:18:52.328 CXX test/cpp_headers/likely.o 00:18:52.328 CXX test/cpp_headers/log.o 00:18:52.328 CXX test/cpp_headers/lvol.o 00:18:52.328 CXX test/cpp_headers/mmio.o 00:18:52.328 CXX test/cpp_headers/memory.o 00:18:52.328 CXX 
test/cpp_headers/nbd.o 00:18:52.328 CXX test/cpp_headers/notify.o 00:18:52.586 CXX test/cpp_headers/nvme.o 00:18:52.586 CXX test/cpp_headers/nvme_intel.o 00:18:52.586 CXX test/cpp_headers/nvme_ocssd.o 00:18:52.586 CXX test/cpp_headers/nvme_ocssd_spec.o 00:18:52.586 CXX test/cpp_headers/nvme_spec.o 00:18:52.586 CXX test/cpp_headers/nvme_zns.o 00:18:52.586 CXX test/cpp_headers/nvmf_cmd.o 00:18:52.586 CXX test/cpp_headers/nvmf_fc_spec.o 00:18:52.586 CXX test/cpp_headers/nvmf.o 00:18:52.586 CXX test/cpp_headers/nvmf_spec.o 00:18:52.586 CXX test/cpp_headers/nvmf_transport.o 00:18:52.586 CXX test/cpp_headers/opal.o 00:18:52.844 CXX test/cpp_headers/opal_spec.o 00:18:52.844 CXX test/cpp_headers/pci_ids.o 00:18:52.844 CXX test/cpp_headers/pipe.o 00:18:52.844 CXX test/cpp_headers/queue.o 00:18:52.844 CXX test/cpp_headers/reduce.o 00:18:52.844 CXX test/cpp_headers/rpc.o 00:18:52.844 CXX test/cpp_headers/scheduler.o 00:18:52.844 LINK cuse 00:18:52.844 CXX test/cpp_headers/scsi.o 00:18:52.844 CXX test/cpp_headers/scsi_spec.o 00:18:52.844 CXX test/cpp_headers/sock.o 00:18:53.102 CXX test/cpp_headers/stdinc.o 00:18:53.102 CXX test/cpp_headers/string.o 00:18:53.102 CXX test/cpp_headers/thread.o 00:18:53.102 CXX test/cpp_headers/trace.o 00:18:53.102 CXX test/cpp_headers/trace_parser.o 00:18:53.102 CXX test/cpp_headers/tree.o 00:18:53.102 CXX test/cpp_headers/ublk.o 00:18:53.102 CXX test/cpp_headers/util.o 00:18:53.102 CXX test/cpp_headers/uuid.o 00:18:53.102 CXX test/cpp_headers/version.o 00:18:53.102 CXX test/cpp_headers/vfio_user_pci.o 00:18:53.102 CXX test/cpp_headers/vfio_user_spec.o 00:18:53.102 CXX test/cpp_headers/vhost.o 00:18:53.360 CXX test/cpp_headers/vmd.o 00:18:53.360 CXX test/cpp_headers/xor.o 00:18:53.360 CXX test/cpp_headers/zipf.o 00:18:55.266 LINK esnap 00:18:55.525 00:18:55.525 real 1m17.708s 00:18:55.525 user 8m38.313s 00:18:55.525 sys 1m42.256s 00:18:55.525 15:55:58 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:18:55.525 15:55:58 -- common/autotest_common.sh@10 -- $ set +x 00:18:55.525 ************************************ 00:18:55.525 END TEST make 00:18:55.525 ************************************ 00:18:55.525 15:55:58 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:55.525 15:55:58 -- nvmf/common.sh@7 -- # uname -s 00:18:55.525 15:55:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.525 15:55:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.525 15:55:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.525 15:55:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.525 15:55:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.525 15:55:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.525 15:55:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.525 15:55:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.525 15:55:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.525 15:55:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.525 15:55:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:18:55.525 15:55:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:18:55.525 15:55:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.525 15:55:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.525 15:55:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:55.525 15:55:58 -- nvmf/common.sh@44 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:55.525 15:55:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.525 15:55:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.525 15:55:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.525 15:55:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.525 15:55:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.525 15:55:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.525 15:55:58 -- paths/export.sh@5 -- # export PATH 00:18:55.525 15:55:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.525 15:55:58 -- nvmf/common.sh@46 -- # : 0 00:18:55.525 15:55:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:55.525 15:55:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:55.525 15:55:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:55.525 15:55:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.525 15:55:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.525 15:55:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:55.525 15:55:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:55.525 15:55:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:55.525 15:55:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:18:55.525 15:55:58 -- spdk/autotest.sh@32 -- # uname -s 00:18:55.525 15:55:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:18:55.525 15:55:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:18:55.525 15:55:58 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:18:55.525 15:55:58 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:18:55.525 15:55:58 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:18:55.525 15:55:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:18:55.784 15:55:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:18:55.784 15:55:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:18:55.784 15:55:58 -- spdk/autotest.sh@48 -- # udevadm_pid=47907 00:18:55.784 15:55:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:18:55.784 15:55:58 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:18:55.784 15:55:58 -- spdk/autotest.sh@54 -- # echo 47923 00:18:55.784 15:55:58 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d 
/home/vagrant/spdk_repo/spdk/../output/power 00:18:55.784 15:55:58 -- spdk/autotest.sh@56 -- # echo 47930 00:18:55.784 15:55:58 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:18:55.784 15:55:58 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:18:55.784 15:55:58 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:18:55.784 15:55:58 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:18:55.784 15:55:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:55.784 15:55:58 -- common/autotest_common.sh@10 -- # set +x 00:18:55.784 15:55:58 -- spdk/autotest.sh@70 -- # create_test_list 00:18:55.784 15:55:58 -- common/autotest_common.sh@736 -- # xtrace_disable 00:18:55.784 15:55:58 -- common/autotest_common.sh@10 -- # set +x 00:18:55.784 15:55:58 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:18:55.784 15:55:58 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:18:55.784 15:55:58 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:18:55.784 15:55:58 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:18:55.784 15:55:58 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:18:55.784 15:55:58 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:18:55.784 15:55:58 -- common/autotest_common.sh@1440 -- # uname 00:18:55.784 15:55:58 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:18:55.784 15:55:58 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:18:55.784 15:55:58 -- common/autotest_common.sh@1460 -- # uname 00:18:55.784 15:55:58 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:18:55.784 15:55:58 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:18:55.784 15:55:58 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:18:55.784 15:55:58 -- spdk/autotest.sh@83 -- # hash lcov 00:18:55.784 15:55:58 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:18:55.784 15:55:58 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:18:55.784 --rc lcov_branch_coverage=1 00:18:55.784 --rc lcov_function_coverage=1 00:18:55.784 --rc genhtml_branch_coverage=1 00:18:55.784 --rc genhtml_function_coverage=1 00:18:55.784 --rc genhtml_legend=1 00:18:55.784 --rc geninfo_all_blocks=1 00:18:55.784 ' 00:18:55.784 15:55:58 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:18:55.784 --rc lcov_branch_coverage=1 00:18:55.784 --rc lcov_function_coverage=1 00:18:55.784 --rc genhtml_branch_coverage=1 00:18:55.784 --rc genhtml_function_coverage=1 00:18:55.784 --rc genhtml_legend=1 00:18:55.784 --rc geninfo_all_blocks=1 00:18:55.784 ' 00:18:55.784 15:55:58 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:18:55.784 --rc lcov_branch_coverage=1 00:18:55.784 --rc lcov_function_coverage=1 00:18:55.784 --rc genhtml_branch_coverage=1 00:18:55.784 --rc genhtml_function_coverage=1 00:18:55.784 --rc genhtml_legend=1 00:18:55.784 --rc geninfo_all_blocks=1 00:18:55.784 --no-external' 00:18:55.784 15:55:58 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:18:55.784 --rc lcov_branch_coverage=1 00:18:55.784 --rc lcov_function_coverage=1 00:18:55.784 --rc genhtml_branch_coverage=1 00:18:55.784 --rc genhtml_function_coverage=1 00:18:55.784 --rc genhtml_legend=1 00:18:55.784 --rc geninfo_all_blocks=1 00:18:55.784 --no-external' 00:18:55.784 15:55:58 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:18:55.784 lcov: LCOV version 1.14 00:18:55.784 15:55:58 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:19:05.759 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:19:05.759 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:19:05.759 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:19:05.759 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:19:05.759 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:19:05.759 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:19:27.694 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:19:27.694 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:19:27.694 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 
00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions 
found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:19:27.695 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:19:27.695 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 
00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:19:27.696 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:19:27.696 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:19:30.229 15:56:32 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:19:30.229 15:56:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:30.229 15:56:32 -- common/autotest_common.sh@10 -- # set +x 00:19:30.229 15:56:32 -- spdk/autotest.sh@102 -- # rm -f 00:19:30.229 15:56:32 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:30.797 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:31.056 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:19:31.056 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:19:31.056 15:56:33 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:19:31.056 15:56:33 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:19:31.056 15:56:33 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:19:31.056 15:56:33 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:19:31.056 15:56:33 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:19:31.056 15:56:33 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:19:31.056 15:56:33 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:19:31.056 15:56:33 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:31.056 15:56:33 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:19:31.056 15:56:33 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:19:31.056 15:56:33 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:19:31.056 15:56:33 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:19:31.056 15:56:33 -- common/autotest_common.sh@1649 -- # [[ -e 
/sys/block/nvme1n1/queue/zoned ]] 00:19:31.056 15:56:33 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:19:31.056 15:56:33 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:19:31.056 15:56:33 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:19:31.056 15:56:33 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:19:31.056 15:56:33 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:19:31.056 15:56:33 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:19:31.056 15:56:33 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:19:31.056 15:56:33 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:19:31.056 15:56:33 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:19:31.056 15:56:33 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:19:31.056 15:56:33 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:19:31.056 15:56:33 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:19:31.056 15:56:33 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:19:31.056 15:56:33 -- spdk/autotest.sh@121 -- # grep -v p 00:19:31.056 15:56:33 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:19:31.056 15:56:33 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:19:31.056 15:56:33 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:19:31.056 15:56:33 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:19:31.056 15:56:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:19:31.056 No valid GPT data, bailing 00:19:31.056 15:56:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:31.056 15:56:33 -- scripts/common.sh@393 -- # pt= 00:19:31.056 15:56:33 -- scripts/common.sh@394 -- # return 1 00:19:31.056 15:56:33 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:19:31.056 1+0 records in 00:19:31.056 1+0 records out 00:19:31.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00447932 s, 234 MB/s 00:19:31.056 15:56:33 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:19:31.056 15:56:33 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:19:31.056 15:56:33 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:19:31.056 15:56:33 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:19:31.056 15:56:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:19:31.056 No valid GPT data, bailing 00:19:31.056 15:56:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:31.056 15:56:33 -- scripts/common.sh@393 -- # pt= 00:19:31.056 15:56:33 -- scripts/common.sh@394 -- # return 1 00:19:31.056 15:56:33 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:19:31.056 1+0 records in 00:19:31.056 1+0 records out 00:19:31.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00391033 s, 268 MB/s 00:19:31.056 15:56:33 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:19:31.056 15:56:33 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:19:31.056 15:56:33 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n2 00:19:31.056 15:56:33 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:19:31.056 15:56:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:19:31.315 No valid GPT data, bailing 00:19:31.315 15:56:33 -- 
scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:19:31.315 15:56:33 -- scripts/common.sh@393 -- # pt= 00:19:31.315 15:56:33 -- scripts/common.sh@394 -- # return 1 00:19:31.315 15:56:33 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:19:31.315 1+0 records in 00:19:31.315 1+0 records out 00:19:31.315 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00391468 s, 268 MB/s 00:19:31.315 15:56:33 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:19:31.315 15:56:33 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:19:31.315 15:56:33 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n3 00:19:31.315 15:56:33 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:19:31.315 15:56:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:19:31.315 No valid GPT data, bailing 00:19:31.315 15:56:34 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:19:31.315 15:56:34 -- scripts/common.sh@393 -- # pt= 00:19:31.315 15:56:34 -- scripts/common.sh@394 -- # return 1 00:19:31.315 15:56:34 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:19:31.315 1+0 records in 00:19:31.315 1+0 records out 00:19:31.315 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00587877 s, 178 MB/s 00:19:31.315 15:56:34 -- spdk/autotest.sh@129 -- # sync 00:19:31.315 15:56:34 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:19:31.315 15:56:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:19:31.315 15:56:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:19:33.257 15:56:35 -- spdk/autotest.sh@135 -- # uname -s 00:19:33.257 15:56:35 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:19:33.257 15:56:35 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:19:33.257 15:56:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:33.257 15:56:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:33.257 15:56:35 -- common/autotest_common.sh@10 -- # set +x 00:19:33.257 ************************************ 00:19:33.257 START TEST setup.sh 00:19:33.257 ************************************ 00:19:33.257 15:56:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:19:33.257 * Looking for test storage... 00:19:33.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:19:33.257 15:56:35 -- setup/test-setup.sh@10 -- # uname -s 00:19:33.257 15:56:35 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:19:33.257 15:56:35 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:19:33.257 15:56:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:33.257 15:56:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:33.257 15:56:35 -- common/autotest_common.sh@10 -- # set +x 00:19:33.257 ************************************ 00:19:33.257 START TEST acl 00:19:33.257 ************************************ 00:19:33.257 15:56:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:19:33.257 * Looking for test storage... 
00:19:33.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:19:33.257 15:56:35 -- setup/acl.sh@10 -- # get_zoned_devs 00:19:33.257 15:56:35 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:19:33.257 15:56:35 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:19:33.257 15:56:35 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:19:33.257 15:56:35 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:19:33.257 15:56:35 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:19:33.257 15:56:35 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:19:33.257 15:56:35 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:33.257 15:56:35 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:19:33.257 15:56:35 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:19:33.257 15:56:35 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:19:33.257 15:56:35 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:19:33.257 15:56:35 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:33.257 15:56:35 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:19:33.257 15:56:35 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:19:33.257 15:56:35 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:19:33.257 15:56:35 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:19:33.257 15:56:35 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:19:33.257 15:56:35 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:19:33.257 15:56:35 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:19:33.257 15:56:35 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:19:33.257 15:56:35 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:19:33.257 15:56:35 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:19:33.257 15:56:35 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:19:33.257 15:56:35 -- setup/acl.sh@12 -- # devs=() 00:19:33.257 15:56:35 -- setup/acl.sh@12 -- # declare -a devs 00:19:33.257 15:56:35 -- setup/acl.sh@13 -- # drivers=() 00:19:33.257 15:56:35 -- setup/acl.sh@13 -- # declare -A drivers 00:19:33.257 15:56:35 -- setup/acl.sh@51 -- # setup reset 00:19:33.257 15:56:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:19:33.257 15:56:35 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:33.825 15:56:36 -- setup/acl.sh@52 -- # collect_setup_devs 00:19:33.825 15:56:36 -- setup/acl.sh@16 -- # local dev driver 00:19:33.825 15:56:36 -- setup/acl.sh@15 -- # setup output status 00:19:33.825 15:56:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:19:33.825 15:56:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:19:33.825 15:56:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:19:34.084 Hugepages 00:19:34.084 node hugesize free / total 00:19:34.084 15:56:36 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:19:34.084 15:56:36 -- setup/acl.sh@19 -- # continue 00:19:34.084 15:56:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:19:34.084 00:19:34.084 Type BDF Vendor Device NUMA Driver Device Block devices 00:19:34.084 15:56:36 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:19:34.084 15:56:36 -- setup/acl.sh@19 -- # continue 00:19:34.084 15:56:36 -- setup/acl.sh@18 -- # read -r 
_ dev _ _ _ driver _ 00:19:34.084 15:56:36 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:19:34.084 15:56:36 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:19:34.084 15:56:36 -- setup/acl.sh@20 -- # continue 00:19:34.084 15:56:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:19:34.084 15:56:36 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:19:34.084 15:56:36 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:19:34.084 15:56:36 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:19:34.084 15:56:36 -- setup/acl.sh@22 -- # devs+=("$dev") 00:19:34.084 15:56:36 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:19:34.084 15:56:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:19:34.084 15:56:36 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:19:34.084 15:56:36 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:19:34.084 15:56:36 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:19:34.084 15:56:36 -- setup/acl.sh@22 -- # devs+=("$dev") 00:19:34.084 15:56:36 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:19:34.084 15:56:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:19:34.084 15:56:36 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:19:34.084 15:56:36 -- setup/acl.sh@54 -- # run_test denied denied 00:19:34.084 15:56:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:34.084 15:56:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:34.084 15:56:36 -- common/autotest_common.sh@10 -- # set +x 00:19:34.084 ************************************ 00:19:34.084 START TEST denied 00:19:34.084 ************************************ 00:19:34.084 15:56:36 -- common/autotest_common.sh@1104 -- # denied 00:19:34.084 15:56:36 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:19:34.084 15:56:36 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:19:34.084 15:56:36 -- setup/acl.sh@38 -- # setup output config 00:19:34.343 15:56:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:19:34.343 15:56:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:19:35.278 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:19:35.278 15:56:37 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:19:35.278 15:56:37 -- setup/acl.sh@28 -- # local dev driver 00:19:35.278 15:56:37 -- setup/acl.sh@30 -- # for dev in "$@" 00:19:35.278 15:56:37 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:19:35.278 15:56:37 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:19:35.278 15:56:37 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:19:35.278 15:56:37 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:19:35.278 15:56:37 -- setup/acl.sh@41 -- # setup reset 00:19:35.278 15:56:37 -- setup/common.sh@9 -- # [[ reset == output ]] 00:19:35.278 15:56:37 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:35.537 ************************************ 00:19:35.537 END TEST denied 00:19:35.537 ************************************ 00:19:35.537 00:19:35.537 real 0m1.403s 00:19:35.537 user 0m0.591s 00:19:35.537 sys 0m0.771s 00:19:35.537 15:56:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:35.537 15:56:38 -- common/autotest_common.sh@10 -- # set +x 00:19:35.537 15:56:38 -- setup/acl.sh@55 -- # run_test allowed allowed 00:19:35.537 15:56:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:35.537 15:56:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:35.537 
15:56:38 -- common/autotest_common.sh@10 -- # set +x 00:19:35.537 ************************************ 00:19:35.537 START TEST allowed 00:19:35.537 ************************************ 00:19:35.537 15:56:38 -- common/autotest_common.sh@1104 -- # allowed 00:19:35.537 15:56:38 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:19:35.537 15:56:38 -- setup/acl.sh@45 -- # setup output config 00:19:35.537 15:56:38 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:19:35.537 15:56:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:19:35.537 15:56:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:19:36.471 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:19:36.471 15:56:39 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:19:36.471 15:56:39 -- setup/acl.sh@28 -- # local dev driver 00:19:36.471 15:56:39 -- setup/acl.sh@30 -- # for dev in "$@" 00:19:36.471 15:56:39 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:19:36.471 15:56:39 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:19:36.471 15:56:39 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:19:36.471 15:56:39 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:19:36.472 15:56:39 -- setup/acl.sh@48 -- # setup reset 00:19:36.472 15:56:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:19:36.472 15:56:39 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:37.038 00:19:37.038 real 0m1.486s 00:19:37.038 user 0m0.659s 00:19:37.038 sys 0m0.827s 00:19:37.038 15:56:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:37.038 15:56:39 -- common/autotest_common.sh@10 -- # set +x 00:19:37.038 ************************************ 00:19:37.038 END TEST allowed 00:19:37.038 ************************************ 00:19:37.297 00:19:37.297 real 0m4.079s 00:19:37.297 user 0m1.790s 00:19:37.297 sys 0m2.281s 00:19:37.297 15:56:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:37.297 15:56:39 -- common/autotest_common.sh@10 -- # set +x 00:19:37.297 ************************************ 00:19:37.297 END TEST acl 00:19:37.297 ************************************ 00:19:37.297 15:56:39 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:19:37.297 15:56:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:37.297 15:56:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:37.297 15:56:39 -- common/autotest_common.sh@10 -- # set +x 00:19:37.297 ************************************ 00:19:37.297 START TEST hugepages 00:19:37.297 ************************************ 00:19:37.297 15:56:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:19:37.297 * Looking for test storage... 
00:19:37.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:19:37.297 15:56:40 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:19:37.297 15:56:40 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:19:37.297 15:56:40 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:19:37.297 15:56:40 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:19:37.297 15:56:40 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:19:37.297 15:56:40 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:19:37.297 15:56:40 -- setup/common.sh@17 -- # local get=Hugepagesize 00:19:37.297 15:56:40 -- setup/common.sh@18 -- # local node= 00:19:37.298 15:56:40 -- setup/common.sh@19 -- # local var val 00:19:37.298 15:56:40 -- setup/common.sh@20 -- # local mem_f mem 00:19:37.298 15:56:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:37.298 15:56:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:37.298 15:56:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:37.298 15:56:40 -- setup/common.sh@28 -- # mapfile -t mem 00:19:37.298 15:56:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 6053288 kB' 'MemAvailable: 7418776 kB' 'Buffers: 2436 kB' 'Cached: 1580060 kB' 'SwapCached: 0 kB' 'Active: 434876 kB' 'Inactive: 1251164 kB' 'Active(anon): 114032 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 105192 kB' 'Mapped: 48852 kB' 'Shmem: 10488 kB' 'KReclaimable: 61288 kB' 'Slab: 134004 kB' 'SReclaimable: 61288 kB' 'SUnreclaim: 72716 kB' 'KernelStack: 6436 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412432 kB' 'Committed_AS: 338576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- 
setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.298 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.298 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.299 15:56:40 -- 
setup/common.sh@31 -- # read -r var val _ 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # continue 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:37.299 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:37.299 15:56:40 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:19:37.299 15:56:40 -- setup/common.sh@33 -- # echo 2048 00:19:37.299 15:56:40 -- setup/common.sh@33 -- # return 0 00:19:37.299 15:56:40 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:19:37.299 15:56:40 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:19:37.299 15:56:40 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:19:37.299 15:56:40 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:19:37.299 15:56:40 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:19:37.299 15:56:40 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
00:19:37.299 15:56:40 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:19:37.299 15:56:40 -- setup/hugepages.sh@207 -- # get_nodes 00:19:37.299 15:56:40 -- setup/hugepages.sh@27 -- # local node 00:19:37.299 15:56:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:19:37.299 15:56:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:19:37.299 15:56:40 -- setup/hugepages.sh@32 -- # no_nodes=1 00:19:37.299 15:56:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:19:37.299 15:56:40 -- setup/hugepages.sh@208 -- # clear_hp 00:19:37.299 15:56:40 -- setup/hugepages.sh@37 -- # local node hp 00:19:37.299 15:56:40 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:19:37.299 15:56:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:19:37.299 15:56:40 -- setup/hugepages.sh@41 -- # echo 0 00:19:37.299 15:56:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:19:37.299 15:56:40 -- setup/hugepages.sh@41 -- # echo 0 00:19:37.299 15:56:40 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:19:37.299 15:56:40 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:19:37.299 15:56:40 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:19:37.299 15:56:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:37.299 15:56:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:37.299 15:56:40 -- common/autotest_common.sh@10 -- # set +x 00:19:37.299 ************************************ 00:19:37.299 START TEST default_setup 00:19:37.299 ************************************ 00:19:37.299 15:56:40 -- common/autotest_common.sh@1104 -- # default_setup 00:19:37.299 15:56:40 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:19:37.299 15:56:40 -- setup/hugepages.sh@49 -- # local size=2097152 00:19:37.299 15:56:40 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:19:37.299 15:56:40 -- setup/hugepages.sh@51 -- # shift 00:19:37.299 15:56:40 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:19:37.299 15:56:40 -- setup/hugepages.sh@52 -- # local node_ids 00:19:37.299 15:56:40 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:19:37.299 15:56:40 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:19:37.299 15:56:40 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:19:37.299 15:56:40 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:19:37.299 15:56:40 -- setup/hugepages.sh@62 -- # local user_nodes 00:19:37.299 15:56:40 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:19:37.299 15:56:40 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:19:37.299 15:56:40 -- setup/hugepages.sh@67 -- # nodes_test=() 00:19:37.299 15:56:40 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:19:37.299 15:56:40 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:19:37.299 15:56:40 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:19:37.299 15:56:40 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:19:37.299 15:56:40 -- setup/hugepages.sh@73 -- # return 0 00:19:37.299 15:56:40 -- setup/hugepages.sh@137 -- # setup output 00:19:37.299 15:56:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:19:37.299 15:56:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:37.866 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:38.127 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:19:38.127 
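For reference, a minimal bash sketch of the lookup and arithmetic the trace above is exercising, inferred from the log only and not taken from the actual setup/common.sh or setup/hugepages.sh sources; meminfo_lookup and size_kb are illustrative names introduced here:

  # Scan /proc/meminfo for one key and print its value, mirroring the
  # IFS=': ' var/val read loop visible in the xtrace above.
  meminfo_lookup() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }

  # The trace resolves Hugepagesize to 2048 (kB) and sizes the test pool:
  # 2097152 kB requested / 2048 kB per page = 1024 hugepages.
  default_hugepages=$(meminfo_lookup Hugepagesize)   # 2048 on this runner
  size_kb=2097152
  nr_hugepages=$(( size_kb / default_hugepages ))    # 1024
  echo "nr_hugepages=$nr_hugepages"

The common.sh@23 check against /sys/devices/system/node/node/meminfo in the trace suggests a per-node meminfo file is consulted when a node number is supplied; with node= left empty, as in this run, the helper falls back to /proc/meminfo, which is what the sketch assumes.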
0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:19:38.127 15:56:40 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:19:38.127 15:56:40 -- setup/hugepages.sh@89 -- # local node 00:19:38.127 15:56:40 -- setup/hugepages.sh@90 -- # local sorted_t 00:19:38.127 15:56:40 -- setup/hugepages.sh@91 -- # local sorted_s 00:19:38.127 15:56:40 -- setup/hugepages.sh@92 -- # local surp 00:19:38.127 15:56:40 -- setup/hugepages.sh@93 -- # local resv 00:19:38.127 15:56:40 -- setup/hugepages.sh@94 -- # local anon 00:19:38.127 15:56:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:19:38.127 15:56:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:19:38.127 15:56:40 -- setup/common.sh@17 -- # local get=AnonHugePages 00:19:38.127 15:56:40 -- setup/common.sh@18 -- # local node= 00:19:38.127 15:56:40 -- setup/common.sh@19 -- # local var val 00:19:38.127 15:56:40 -- setup/common.sh@20 -- # local mem_f mem 00:19:38.127 15:56:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:38.127 15:56:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:38.127 15:56:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:38.127 15:56:40 -- setup/common.sh@28 -- # mapfile -t mem 00:19:38.127 15:56:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.127 15:56:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8139128 kB' 'MemAvailable: 9504444 kB' 'Buffers: 2436 kB' 'Cached: 1580048 kB' 'SwapCached: 0 kB' 'Active: 451676 kB' 'Inactive: 1251168 kB' 'Active(anon): 130832 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121996 kB' 'Mapped: 48848 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 133628 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 72692 kB' 'KernelStack: 6384 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.127 15:56:40 -- 
setup/common.sh@31 -- # read -r var val _ 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.127 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.127 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 
15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 
-- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.128 15:56:40 -- setup/common.sh@33 -- # echo 0 00:19:38.128 15:56:40 -- setup/common.sh@33 -- # return 0 00:19:38.128 15:56:40 -- setup/hugepages.sh@97 -- # anon=0 00:19:38.128 15:56:40 -- 
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:19:38.128 15:56:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:19:38.128 15:56:40 -- setup/common.sh@18 -- # local node= 00:19:38.128 15:56:40 -- setup/common.sh@19 -- # local var val 00:19:38.128 15:56:40 -- setup/common.sh@20 -- # local mem_f mem 00:19:38.128 15:56:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:38.128 15:56:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:38.128 15:56:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:38.128 15:56:40 -- setup/common.sh@28 -- # mapfile -t mem 00:19:38.128 15:56:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:38.128 15:56:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8139128 kB' 'MemAvailable: 9504444 kB' 'Buffers: 2436 kB' 'Cached: 1580048 kB' 'SwapCached: 0 kB' 'Active: 451324 kB' 'Inactive: 1251168 kB' 'Active(anon): 130480 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121624 kB' 'Mapped: 48908 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 133608 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 72672 kB' 'KernelStack: 6320 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.128 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.128 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 
00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- 
setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 
00:19:38.129 15:56:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.129 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.129 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.130 15:56:40 -- setup/common.sh@33 -- # echo 0 00:19:38.130 15:56:40 -- setup/common.sh@33 -- # return 0 00:19:38.130 15:56:40 -- setup/hugepages.sh@99 -- # surp=0 00:19:38.130 15:56:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:19:38.130 15:56:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:19:38.130 15:56:40 -- setup/common.sh@18 -- # local node= 00:19:38.130 15:56:40 -- setup/common.sh@19 -- # local var val 00:19:38.130 15:56:40 -- setup/common.sh@20 -- # local mem_f mem 00:19:38.130 15:56:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:38.130 15:56:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:38.130 15:56:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:38.130 15:56:40 -- setup/common.sh@28 -- # mapfile -t mem 00:19:38.130 15:56:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8139128 kB' 'MemAvailable: 9504444 kB' 'Buffers: 2436 kB' 'Cached: 1580048 kB' 
'SwapCached: 0 kB' 'Active: 451380 kB' 'Inactive: 1251168 kB' 'Active(anon): 130536 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121668 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 133608 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 72672 kB' 'KernelStack: 6336 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- 
setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.130 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.130 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val 
_ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 
00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.131 15:56:40 -- setup/common.sh@33 -- # echo 0 00:19:38.131 15:56:40 -- setup/common.sh@33 -- # return 0 00:19:38.131 15:56:40 -- setup/hugepages.sh@100 -- # resv=0 00:19:38.131 nr_hugepages=1024 00:19:38.131 15:56:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:19:38.131 resv_hugepages=0 00:19:38.131 15:56:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:19:38.131 surplus_hugepages=0 00:19:38.131 15:56:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:19:38.131 anon_hugepages=0 00:19:38.131 15:56:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:19:38.131 15:56:40 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:19:38.131 15:56:40 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:19:38.131 15:56:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:19:38.131 15:56:40 -- setup/common.sh@17 -- # local get=HugePages_Total 00:19:38.131 15:56:40 -- setup/common.sh@18 -- # local node= 00:19:38.131 15:56:40 -- setup/common.sh@19 -- # local var val 00:19:38.131 15:56:40 -- setup/common.sh@20 -- # local mem_f mem 00:19:38.131 15:56:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:38.131 15:56:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:38.131 15:56:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:38.131 15:56:40 -- setup/common.sh@28 -- # mapfile -t mem 00:19:38.131 15:56:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8139128 kB' 'MemAvailable: 9504444 kB' 'Buffers: 2436 kB' 'Cached: 1580048 kB' 'SwapCached: 0 kB' 'Active: 451156 kB' 'Inactive: 1251168 kB' 'Active(anon): 130312 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121500 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 60936 kB' 'Slab: 133608 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 72672 kB' 'KernelStack: 6336 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.131 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.131 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # 
read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 
00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.132 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.132 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.133 15:56:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.133 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.133 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.133 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.133 15:56:40 -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.133 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.133 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.133 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.133 15:56:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.133 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.133 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.133 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.133 15:56:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.133 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.133 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.133 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.133 15:56:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.133 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.133 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.133 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.133 15:56:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.133 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.392 15:56:40 -- setup/common.sh@33 -- # echo 1024 
00:19:38.392 15:56:40 -- setup/common.sh@33 -- # return 0 00:19:38.392 15:56:40 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:19:38.392 15:56:40 -- setup/hugepages.sh@112 -- # get_nodes 00:19:38.392 15:56:40 -- setup/hugepages.sh@27 -- # local node 00:19:38.392 15:56:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:19:38.392 15:56:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:19:38.392 15:56:40 -- setup/hugepages.sh@32 -- # no_nodes=1 00:19:38.392 15:56:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:19:38.392 15:56:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:19:38.392 15:56:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:19:38.392 15:56:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:19:38.392 15:56:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:19:38.392 15:56:40 -- setup/common.sh@18 -- # local node=0 00:19:38.392 15:56:40 -- setup/common.sh@19 -- # local var val 00:19:38.392 15:56:40 -- setup/common.sh@20 -- # local mem_f mem 00:19:38.392 15:56:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:38.392 15:56:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:19:38.392 15:56:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:19:38.392 15:56:40 -- setup/common.sh@28 -- # mapfile -t mem 00:19:38.392 15:56:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.392 15:56:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8139128 kB' 'MemUsed: 4102836 kB' 'SwapCached: 0 kB' 'Active: 451096 kB' 'Inactive: 1251168 kB' 'Active(anon): 130252 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1582484 kB' 'Mapped: 48816 kB' 'AnonPages: 121384 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60936 kB' 'Slab: 133608 kB' 'SReclaimable: 60936 kB' 'SUnreclaim: 72672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.392 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.392 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:40 -- setup/common.sh@31 -- # 
IFS=': ' 00:19:38.393 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:40 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:40 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:40 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 
15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 
15:56:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.393 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.393 15:56:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.393 15:56:41 -- setup/common.sh@33 -- # echo 0 00:19:38.393 15:56:41 -- setup/common.sh@33 -- # return 0 00:19:38.393 15:56:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:19:38.393 15:56:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:19:38.393 15:56:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:19:38.393 15:56:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:19:38.393 node0=1024 expecting 1024 00:19:38.393 15:56:41 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:19:38.393 15:56:41 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:19:38.393 00:19:38.393 real 0m0.927s 00:19:38.393 user 0m0.453s 00:19:38.393 sys 0m0.448s 00:19:38.393 15:56:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:38.393 15:56:41 -- common/autotest_common.sh@10 -- # set +x 00:19:38.393 ************************************ 00:19:38.393 END TEST default_setup 00:19:38.393 ************************************ 00:19:38.393 15:56:41 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:19:38.393 15:56:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:38.393 15:56:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:38.393 15:56:41 -- common/autotest_common.sh@10 -- # set +x 00:19:38.393 ************************************ 00:19:38.393 START TEST per_node_1G_alloc 00:19:38.393 ************************************ 
00:19:38.393 15:56:41 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:19:38.393 15:56:41 -- setup/hugepages.sh@143 -- # local IFS=, 00:19:38.393 15:56:41 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:19:38.393 15:56:41 -- setup/hugepages.sh@49 -- # local size=1048576 00:19:38.393 15:56:41 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:19:38.393 15:56:41 -- setup/hugepages.sh@51 -- # shift 00:19:38.393 15:56:41 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:19:38.394 15:56:41 -- setup/hugepages.sh@52 -- # local node_ids 00:19:38.394 15:56:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:19:38.394 15:56:41 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:19:38.394 15:56:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:19:38.394 15:56:41 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:19:38.394 15:56:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:19:38.394 15:56:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:19:38.394 15:56:41 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:19:38.394 15:56:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:19:38.394 15:56:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:19:38.394 15:56:41 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:19:38.394 15:56:41 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:19:38.394 15:56:41 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:19:38.394 15:56:41 -- setup/hugepages.sh@73 -- # return 0 00:19:38.394 15:56:41 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:19:38.394 15:56:41 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:19:38.394 15:56:41 -- setup/hugepages.sh@146 -- # setup output 00:19:38.394 15:56:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:19:38.394 15:56:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:38.654 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:38.654 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:38.654 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:38.654 15:56:41 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:19:38.654 15:56:41 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:19:38.654 15:56:41 -- setup/hugepages.sh@89 -- # local node 00:19:38.654 15:56:41 -- setup/hugepages.sh@90 -- # local sorted_t 00:19:38.654 15:56:41 -- setup/hugepages.sh@91 -- # local sorted_s 00:19:38.654 15:56:41 -- setup/hugepages.sh@92 -- # local surp 00:19:38.654 15:56:41 -- setup/hugepages.sh@93 -- # local resv 00:19:38.654 15:56:41 -- setup/hugepages.sh@94 -- # local anon 00:19:38.654 15:56:41 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:19:38.654 15:56:41 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:19:38.654 15:56:41 -- setup/common.sh@17 -- # local get=AnonHugePages 00:19:38.654 15:56:41 -- setup/common.sh@18 -- # local node= 00:19:38.654 15:56:41 -- setup/common.sh@19 -- # local var val 00:19:38.654 15:56:41 -- setup/common.sh@20 -- # local mem_f mem 00:19:38.654 15:56:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:38.654 15:56:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:38.654 15:56:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:38.654 15:56:41 -- setup/common.sh@28 -- # mapfile -t mem 00:19:38.654 15:56:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 
00:19:38.654 15:56:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9187980 kB' 'MemAvailable: 10553304 kB' 'Buffers: 2436 kB' 'Cached: 1580048 kB' 'SwapCached: 0 kB' 'Active: 451232 kB' 'Inactive: 1251176 kB' 'Active(anon): 130388 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121764 kB' 'Mapped: 48944 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133584 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72652 kB' 'KernelStack: 6344 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.654 15:56:41 
-- setup/common.sh@31 -- # read -r var val _ 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.654 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.654 15:56:41 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 
15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:38.655 15:56:41 -- setup/common.sh@33 -- # echo 0 00:19:38.655 15:56:41 -- setup/common.sh@33 -- # return 0 00:19:38.655 15:56:41 -- setup/hugepages.sh@97 -- # anon=0 00:19:38.655 15:56:41 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:19:38.655 15:56:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:19:38.655 15:56:41 -- setup/common.sh@18 -- # local node= 00:19:38.655 15:56:41 -- setup/common.sh@19 -- # local var val 00:19:38.655 15:56:41 -- setup/common.sh@20 -- # local mem_f mem 00:19:38.655 15:56:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:38.655 15:56:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:38.655 15:56:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:38.655 15:56:41 -- setup/common.sh@28 -- # mapfile -t mem 00:19:38.655 15:56:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9187980 kB' 'MemAvailable: 10553304 kB' 'Buffers: 2436 kB' 'Cached: 1580048 kB' 'SwapCached: 0 kB' 'Active: 451044 kB' 'Inactive: 1251176 kB' 'Active(anon): 130200 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121312 kB' 'Mapped: 48944 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133580 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72648 
kB' 'KernelStack: 6312 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 
15:56:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.655 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.655 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 
00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # 
read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.656 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.656 15:56:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.656 15:56:41 -- setup/common.sh@33 -- # echo 0 00:19:38.656 15:56:41 -- setup/common.sh@33 -- # return 0 00:19:38.657 15:56:41 -- setup/hugepages.sh@99 -- # surp=0 00:19:38.657 15:56:41 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:19:38.657 15:56:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:19:38.657 15:56:41 -- setup/common.sh@18 -- # local node= 00:19:38.657 15:56:41 -- setup/common.sh@19 -- # local var val 00:19:38.657 15:56:41 -- setup/common.sh@20 -- # local mem_f mem 00:19:38.657 15:56:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:38.657 15:56:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:38.657 15:56:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:38.657 15:56:41 -- setup/common.sh@28 -- # mapfile -t mem 00:19:38.917 15:56:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:38.917 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.917 15:56:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9187980 kB' 'MemAvailable: 10553304 kB' 'Buffers: 2436 kB' 'Cached: 1580048 kB' 'SwapCached: 0 kB' 'Active: 451000 kB' 'Inactive: 1251176 kB' 'Active(anon): 130156 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121560 kB' 'Mapped: 48944 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133580 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72648 kB' 'KernelStack: 6312 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:38.917 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.917 15:56:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.917 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.917 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.917 15:56:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:19:38.917 15:56:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.917 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.917 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.917 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.917 15:56:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.917 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.917 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.917 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.917 15:56:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.917 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.917 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.917 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.917 15:56:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.917 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- 
# continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.918 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.918 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:38.919 15:56:41 -- setup/common.sh@33 -- # echo 0 00:19:38.919 15:56:41 -- setup/common.sh@33 -- # return 0 00:19:38.919 15:56:41 -- setup/hugepages.sh@100 -- # resv=0 00:19:38.919 nr_hugepages=512 00:19:38.919 resv_hugepages=0 00:19:38.919 surplus_hugepages=0 00:19:38.919 
anon_hugepages=0 00:19:38.919 15:56:41 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:19:38.919 15:56:41 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:19:38.919 15:56:41 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:19:38.919 15:56:41 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:19:38.919 15:56:41 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:19:38.919 15:56:41 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:19:38.919 15:56:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:19:38.919 15:56:41 -- setup/common.sh@17 -- # local get=HugePages_Total 00:19:38.919 15:56:41 -- setup/common.sh@18 -- # local node= 00:19:38.919 15:56:41 -- setup/common.sh@19 -- # local var val 00:19:38.919 15:56:41 -- setup/common.sh@20 -- # local mem_f mem 00:19:38.919 15:56:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:38.919 15:56:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:38.919 15:56:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:38.919 15:56:41 -- setup/common.sh@28 -- # mapfile -t mem 00:19:38.919 15:56:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9187980 kB' 'MemAvailable: 10553304 kB' 'Buffers: 2436 kB' 'Cached: 1580048 kB' 'SwapCached: 0 kB' 'Active: 451004 kB' 'Inactive: 1251176 kB' 'Active(anon): 130160 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121560 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133580 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72648 kB' 'KernelStack: 6368 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ Buffers 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 
00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.919 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.919 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 
15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:38.920 15:56:41 -- setup/common.sh@33 -- # echo 512 00:19:38.920 15:56:41 -- setup/common.sh@33 -- # return 0 00:19:38.920 15:56:41 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:19:38.920 15:56:41 -- setup/hugepages.sh@112 -- # get_nodes 00:19:38.920 15:56:41 -- setup/hugepages.sh@27 -- # local node 00:19:38.920 15:56:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:19:38.920 15:56:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:19:38.920 15:56:41 -- setup/hugepages.sh@32 -- # no_nodes=1 00:19:38.920 15:56:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:19:38.920 15:56:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:19:38.920 15:56:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:19:38.920 15:56:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:19:38.920 15:56:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:19:38.920 15:56:41 -- setup/common.sh@18 -- # local node=0 00:19:38.920 15:56:41 -- setup/common.sh@19 -- # local var val 00:19:38.920 15:56:41 -- setup/common.sh@20 -- # local mem_f mem 00:19:38.920 15:56:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:38.920 15:56:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:19:38.920 15:56:41 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node0/meminfo 00:19:38.920 15:56:41 -- setup/common.sh@28 -- # mapfile -t mem 00:19:38.920 15:56:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.920 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.920 15:56:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9187980 kB' 'MemUsed: 3053984 kB' 'SwapCached: 0 kB' 'Active: 451008 kB' 'Inactive: 1251176 kB' 'Active(anon): 130164 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1582484 kB' 'Mapped: 48816 kB' 'AnonPages: 121556 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60932 kB' 'Slab: 133572 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72640 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- 
setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- 
setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # continue 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # IFS=': ' 00:19:38.921 15:56:41 -- setup/common.sh@31 -- # read -r var val _ 00:19:38.921 15:56:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:38.921 15:56:41 -- setup/common.sh@33 -- # echo 0 00:19:38.921 15:56:41 -- setup/common.sh@33 -- # return 0 00:19:38.921 15:56:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:19:38.921 15:56:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:19:38.921 node0=512 expecting 512 00:19:38.921 15:56:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:19:38.921 15:56:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:19:38.921 15:56:41 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:19:38.921 15:56:41 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:19:38.921 00:19:38.921 real 0m0.559s 00:19:38.921 user 0m0.279s 00:19:38.921 sys 0m0.285s 00:19:38.921 15:56:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:38.921 ************************************ 00:19:38.922 END TEST per_node_1G_alloc 00:19:38.922 15:56:41 -- common/autotest_common.sh@10 -- # set +x 00:19:38.922 ************************************ 00:19:38.922 15:56:41 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:19:38.922 15:56:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:38.922 15:56:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:38.922 15:56:41 -- common/autotest_common.sh@10 -- # set +x 00:19:38.922 ************************************ 00:19:38.922 START TEST even_2G_alloc 00:19:38.922 ************************************ 00:19:38.922 15:56:41 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:19:38.922 15:56:41 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:19:38.922 15:56:41 -- setup/hugepages.sh@49 -- # local size=2097152 00:19:38.922 15:56:41 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:19:38.922 15:56:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:19:38.922 15:56:41 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:19:38.922 15:56:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:19:38.922 15:56:41 -- setup/hugepages.sh@62 -- # user_nodes=() 00:19:38.922 15:56:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:19:38.922 15:56:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:19:38.922 15:56:41 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:19:38.922 15:56:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:19:38.922 15:56:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:19:38.922 15:56:41 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:19:38.922 15:56:41 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:19:38.922 15:56:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:19:38.922 15:56:41 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:19:38.922 15:56:41 -- setup/hugepages.sh@83 -- # : 0 00:19:38.922 15:56:41 -- 
setup/hugepages.sh@84 -- # : 0 00:19:38.922 15:56:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:19:38.922 15:56:41 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:19:38.922 15:56:41 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:19:38.922 15:56:41 -- setup/hugepages.sh@153 -- # setup output 00:19:38.922 15:56:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:19:38.922 15:56:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:39.184 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:39.184 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:39.184 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:39.184 15:56:42 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:19:39.184 15:56:42 -- setup/hugepages.sh@89 -- # local node 00:19:39.184 15:56:42 -- setup/hugepages.sh@90 -- # local sorted_t 00:19:39.184 15:56:42 -- setup/hugepages.sh@91 -- # local sorted_s 00:19:39.184 15:56:42 -- setup/hugepages.sh@92 -- # local surp 00:19:39.184 15:56:42 -- setup/hugepages.sh@93 -- # local resv 00:19:39.184 15:56:42 -- setup/hugepages.sh@94 -- # local anon 00:19:39.184 15:56:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:19:39.184 15:56:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:19:39.184 15:56:42 -- setup/common.sh@17 -- # local get=AnonHugePages 00:19:39.184 15:56:42 -- setup/common.sh@18 -- # local node= 00:19:39.184 15:56:42 -- setup/common.sh@19 -- # local var val 00:19:39.184 15:56:42 -- setup/common.sh@20 -- # local mem_f mem 00:19:39.184 15:56:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:39.184 15:56:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:39.184 15:56:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:39.184 15:56:42 -- setup/common.sh@28 -- # mapfile -t mem 00:19:39.184 15:56:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8141968 kB' 'MemAvailable: 9507292 kB' 'Buffers: 2436 kB' 'Cached: 1580048 kB' 'SwapCached: 0 kB' 'Active: 451436 kB' 'Inactive: 1251176 kB' 'Active(anon): 130592 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121984 kB' 'Mapped: 48944 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133588 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72656 kB' 'KernelStack: 6340 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var 
val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.184 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.184 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.185 15:56:42 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.185 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.185 15:56:42 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.185 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.185 
15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.185 15:56:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.185 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.185 15:56:42 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.185 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.185 15:56:42 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.185 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.185 15:56:42 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.185 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.185 15:56:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.185 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.185 15:56:42 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.185 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.185 15:56:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.185 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.185 15:56:42 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.185 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.185 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # 
continue 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.464 15:56:42 -- setup/common.sh@33 -- # echo 0 00:19:39.464 15:56:42 -- setup/common.sh@33 -- # return 0 00:19:39.464 15:56:42 -- setup/hugepages.sh@97 -- # anon=0 00:19:39.464 15:56:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:19:39.464 15:56:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:19:39.464 15:56:42 -- setup/common.sh@18 -- # local node= 00:19:39.464 15:56:42 -- setup/common.sh@19 -- # local var val 00:19:39.464 15:56:42 -- setup/common.sh@20 -- # local mem_f mem 00:19:39.464 15:56:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:39.464 15:56:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:39.464 15:56:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:39.464 15:56:42 -- setup/common.sh@28 -- # mapfile -t mem 00:19:39.464 15:56:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.464 15:56:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8141968 kB' 'MemAvailable: 9507292 kB' 'Buffers: 2436 kB' 'Cached: 1580048 kB' 'SwapCached: 0 kB' 'Active: 450976 kB' 'Inactive: 1251176 kB' 'Active(anon): 130132 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121500 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133596 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72664 kB' 'KernelStack: 6336 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # 
continue 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.464 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.464 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- 
# read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- 
# continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.465 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.465 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.465 15:56:42 -- setup/common.sh@33 -- # echo 0 00:19:39.465 15:56:42 -- setup/common.sh@33 -- # return 0 00:19:39.465 15:56:42 -- setup/hugepages.sh@99 -- # surp=0 00:19:39.465 15:56:42 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:19:39.465 15:56:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:19:39.465 15:56:42 -- setup/common.sh@18 -- # local node= 00:19:39.465 15:56:42 -- setup/common.sh@19 -- # local var val 00:19:39.465 15:56:42 -- 
setup/common.sh@20 -- # local mem_f mem 00:19:39.465 15:56:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:39.466 15:56:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:39.466 15:56:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:39.466 15:56:42 -- setup/common.sh@28 -- # mapfile -t mem 00:19:39.466 15:56:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8141968 kB' 'MemAvailable: 9507292 kB' 'Buffers: 2436 kB' 'Cached: 1580048 kB' 'SwapCached: 0 kB' 'Active: 451008 kB' 'Inactive: 1251176 kB' 'Active(anon): 130164 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121568 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133596 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72664 kB' 'KernelStack: 6352 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 
00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- 
setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.466 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.466 15:56:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val 
_ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.467 15:56:42 -- setup/common.sh@33 -- # echo 0 00:19:39.467 15:56:42 -- setup/common.sh@33 -- # return 0 00:19:39.467 nr_hugepages=1024 00:19:39.467 15:56:42 -- setup/hugepages.sh@100 -- # resv=0 00:19:39.467 15:56:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:19:39.467 resv_hugepages=0 00:19:39.467 15:56:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:19:39.467 surplus_hugepages=0 00:19:39.467 anon_hugepages=0 00:19:39.467 15:56:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:19:39.467 15:56:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:19:39.467 15:56:42 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:19:39.467 15:56:42 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:19:39.467 15:56:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:19:39.467 15:56:42 -- setup/common.sh@17 -- # local get=HugePages_Total 00:19:39.467 15:56:42 -- setup/common.sh@18 -- # local node= 00:19:39.467 15:56:42 -- setup/common.sh@19 -- # local var val 00:19:39.467 15:56:42 -- setup/common.sh@20 -- # local mem_f mem 00:19:39.467 15:56:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:39.467 15:56:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:39.467 15:56:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:39.467 15:56:42 -- setup/common.sh@28 -- # mapfile -t mem 00:19:39.467 15:56:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8141968 kB' 'MemAvailable: 9507292 kB' 
'Buffers: 2436 kB' 'Cached: 1580048 kB' 'SwapCached: 0 kB' 'Active: 451028 kB' 'Inactive: 1251176 kB' 'Active(anon): 130184 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121564 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133596 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72664 kB' 'KernelStack: 6352 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.467 15:56:42 -- 
setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.467 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.467 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 
00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 
00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 
00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.468 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.468 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.468 15:56:42 -- setup/common.sh@33 -- # echo 1024 00:19:39.468 15:56:42 -- setup/common.sh@33 -- # return 0 00:19:39.468 15:56:42 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:19:39.468 15:56:42 -- setup/hugepages.sh@112 -- # get_nodes 00:19:39.468 15:56:42 -- setup/hugepages.sh@27 -- # local node 00:19:39.468 15:56:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:19:39.468 15:56:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:19:39.468 15:56:42 -- setup/hugepages.sh@32 -- # no_nodes=1 00:19:39.468 15:56:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:19:39.468 15:56:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:19:39.468 15:56:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:19:39.468 15:56:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:19:39.468 15:56:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:19:39.468 15:56:42 -- setup/common.sh@18 -- # local node=0 00:19:39.468 15:56:42 -- setup/common.sh@19 -- # local var val 00:19:39.468 15:56:42 -- setup/common.sh@20 -- # local mem_f mem 00:19:39.468 15:56:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:39.468 15:56:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:19:39.468 15:56:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:19:39.469 15:56:42 -- setup/common.sh@28 -- # mapfile -t mem 00:19:39.469 15:56:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8141968 kB' 'MemUsed: 4099996 kB' 'SwapCached: 0 kB' 'Active: 451008 kB' 'Inactive: 1251176 kB' 'Active(anon): 130164 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1582484 kB' 'Mapped: 48816 kB' 'AnonPages: 121580 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60932 kB' 'Slab: 133596 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 
00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- 
setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.469 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.469 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.470 15:56:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.470 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.470 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.470 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.470 15:56:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.470 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.470 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.470 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.470 15:56:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.470 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.470 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.470 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.470 15:56:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.470 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.470 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.470 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.470 15:56:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.470 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.470 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.470 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.470 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.470 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.470 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.470 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.470 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.470 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.470 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.470 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.470 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.470 15:56:42 -- setup/common.sh@33 -- # echo 0 00:19:39.470 15:56:42 -- setup/common.sh@33 -- # return 0 00:19:39.470 15:56:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:19:39.470 15:56:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:19:39.470 15:56:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:19:39.470 15:56:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:19:39.470 node0=1024 expecting 1024 00:19:39.470 15:56:42 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:19:39.470 15:56:42 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:19:39.470 00:19:39.470 real 0m0.500s 00:19:39.470 user 0m0.248s 00:19:39.470 sys 0m0.261s 
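The long runs of "setup/common.sh@32 -- # continue" above are the xtrace of the get_meminfo helper in setup/common.sh scanning /proc/meminfo one field at a time until it reaches the requested key (HugePages_Surp here), then echoing its value. A minimal sketch of that pattern, with the field-matching loop modelled on the trace (the real helper also handles per-node meminfo files and caches the file contents, which this sketch omits):

# Sketch only: print the value of one /proc/meminfo field, mirroring the
# IFS=': ' / read -r var val _ / continue loop visible in the trace above.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other field
        echo "$val"                        # unit ("kB"), if any, lands in $_
        return 0
    done < /proc/meminfo
    return 1
}
# Example: get_meminfo_sketch HugePages_Surp   -> prints 0 on this host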
00:19:39.470 15:56:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:39.470 15:56:42 -- common/autotest_common.sh@10 -- # set +x 00:19:39.470 ************************************ 00:19:39.470 END TEST even_2G_alloc 00:19:39.470 ************************************ 00:19:39.470 15:56:42 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:19:39.470 15:56:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:39.470 15:56:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:39.470 15:56:42 -- common/autotest_common.sh@10 -- # set +x 00:19:39.470 ************************************ 00:19:39.470 START TEST odd_alloc 00:19:39.470 ************************************ 00:19:39.470 15:56:42 -- common/autotest_common.sh@1104 -- # odd_alloc 00:19:39.470 15:56:42 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:19:39.470 15:56:42 -- setup/hugepages.sh@49 -- # local size=2098176 00:19:39.470 15:56:42 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:19:39.470 15:56:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:19:39.470 15:56:42 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:19:39.470 15:56:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:19:39.470 15:56:42 -- setup/hugepages.sh@62 -- # user_nodes=() 00:19:39.470 15:56:42 -- setup/hugepages.sh@62 -- # local user_nodes 00:19:39.470 15:56:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:19:39.470 15:56:42 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:19:39.470 15:56:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:19:39.470 15:56:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:19:39.470 15:56:42 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:19:39.470 15:56:42 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:19:39.470 15:56:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:19:39.470 15:56:42 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:19:39.470 15:56:42 -- setup/hugepages.sh@83 -- # : 0 00:19:39.470 15:56:42 -- setup/hugepages.sh@84 -- # : 0 00:19:39.470 15:56:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:19:39.470 15:56:42 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:19:39.470 15:56:42 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:19:39.470 15:56:42 -- setup/hugepages.sh@160 -- # setup output 00:19:39.470 15:56:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:19:39.470 15:56:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:39.728 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:39.728 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:39.728 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:39.728 15:56:42 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:19:39.728 15:56:42 -- setup/hugepages.sh@89 -- # local node 00:19:39.990 15:56:42 -- setup/hugepages.sh@90 -- # local sorted_t 00:19:39.990 15:56:42 -- setup/hugepages.sh@91 -- # local sorted_s 00:19:39.990 15:56:42 -- setup/hugepages.sh@92 -- # local surp 00:19:39.990 15:56:42 -- setup/hugepages.sh@93 -- # local resv 00:19:39.990 15:56:42 -- setup/hugepages.sh@94 -- # local anon 00:19:39.990 15:56:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:19:39.990 15:56:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:19:39.990 15:56:42 -- setup/common.sh@17 -- # local get=AnonHugePages 00:19:39.990 15:56:42 -- setup/common.sh@18 -- # local node= 
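For the odd_alloc case the trace above shows get_test_nr_hugepages being called with 2098176 kB (HUGEMEM=2049 MB) and arriving at nr_hugepages=1025. With the 2048 kB hugepage size reported in meminfo below, 2098176 / 2048 = 1024.5, so an odd count of 1025 pages is what the test then expects to find in HugePages_Total. A rough sketch of that arithmetic as implied by the trace (the rounding step is an assumption; the script's own formula is not visible in this excerpt):

# Sketch: derive the odd hugepage count the trace above settles on.
hugemem_mb=2049                        # HUGEMEM from the trace
size_kb=$(( hugemem_mb * 1024 ))       # 2098176 kB, as passed to get_test_nr_hugepages
hugepage_kb=2048                       # Hugepagesize from /proc/meminfo
nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
echo "$nr_hugepages"                   # -> 1025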
00:19:39.990 15:56:42 -- setup/common.sh@19 -- # local var val 00:19:39.990 15:56:42 -- setup/common.sh@20 -- # local mem_f mem 00:19:39.990 15:56:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:39.990 15:56:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:39.990 15:56:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:39.990 15:56:42 -- setup/common.sh@28 -- # mapfile -t mem 00:19:39.990 15:56:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.990 15:56:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8134900 kB' 'MemAvailable: 9500224 kB' 'Buffers: 2436 kB' 'Cached: 1580048 kB' 'SwapCached: 0 kB' 'Active: 451624 kB' 'Inactive: 1251176 kB' 'Active(anon): 130780 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122200 kB' 'Mapped: 48944 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133580 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72648 kB' 'KernelStack: 6376 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.990 
15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.990 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.990 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 
00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.991 15:56:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:39.991 15:56:42 -- setup/common.sh@33 -- # echo 0 00:19:39.991 15:56:42 -- setup/common.sh@33 -- # return 0 00:19:39.991 15:56:42 -- setup/hugepages.sh@97 -- # anon=0 00:19:39.991 15:56:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:19:39.991 15:56:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:19:39.991 15:56:42 -- setup/common.sh@18 -- # local node= 00:19:39.991 15:56:42 -- setup/common.sh@19 -- # local var val 00:19:39.991 15:56:42 -- setup/common.sh@20 -- # local mem_f mem 00:19:39.991 15:56:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:39.991 15:56:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:39.991 15:56:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:39.991 15:56:42 -- setup/common.sh@28 -- # mapfile -t mem 00:19:39.991 15:56:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:39.991 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 
15:56:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8134900 kB' 'MemAvailable: 9500224 kB' 'Buffers: 2436 kB' 'Cached: 1580048 kB' 'SwapCached: 0 kB' 'Active: 451100 kB' 'Inactive: 1251176 kB' 'Active(anon): 130256 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121624 kB' 'Mapped: 48940 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133592 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72660 kB' 'KernelStack: 6296 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 
15:56:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- 
# IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.992 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.992 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.993 
15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read 
-r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.993 15:56:42 -- setup/common.sh@33 -- # echo 0 00:19:39.993 15:56:42 -- setup/common.sh@33 -- # return 0 00:19:39.993 15:56:42 -- setup/hugepages.sh@99 -- # surp=0 00:19:39.993 15:56:42 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:19:39.993 15:56:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:19:39.993 15:56:42 -- setup/common.sh@18 -- # local node= 00:19:39.993 15:56:42 -- setup/common.sh@19 -- # local var val 00:19:39.993 15:56:42 -- setup/common.sh@20 -- # local mem_f mem 00:19:39.993 15:56:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:39.993 15:56:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:39.993 15:56:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:39.993 15:56:42 -- setup/common.sh@28 -- # mapfile -t mem 00:19:39.993 15:56:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8134900 kB' 'MemAvailable: 9500224 kB' 'Buffers: 2436 kB' 'Cached: 1580048 kB' 'SwapCached: 0 kB' 'Active: 451024 kB' 'Inactive: 1251176 kB' 'Active(anon): 130180 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121592 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133600 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72668 kB' 'KernelStack: 6352 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.993 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.993 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # 
continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 
-- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.994 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.994 15:56:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.995 
15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:39.995 15:56:42 -- setup/common.sh@33 -- # echo 0 00:19:39.995 15:56:42 -- setup/common.sh@33 -- # return 0 00:19:39.995 nr_hugepages=1025 00:19:39.995 resv_hugepages=0 00:19:39.995 surplus_hugepages=0 00:19:39.995 anon_hugepages=0 00:19:39.995 15:56:42 -- setup/hugepages.sh@100 -- # resv=0 00:19:39.995 15:56:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:19:39.995 15:56:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:19:39.995 15:56:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:19:39.995 15:56:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:19:39.995 15:56:42 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:19:39.995 15:56:42 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:19:39.995 15:56:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:19:39.995 15:56:42 -- setup/common.sh@17 -- # local get=HugePages_Total 00:19:39.995 15:56:42 -- setup/common.sh@18 -- # local node= 00:19:39.995 15:56:42 -- setup/common.sh@19 -- # local var val 00:19:39.995 15:56:42 -- setup/common.sh@20 -- # local mem_f mem 00:19:39.995 15:56:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:39.995 15:56:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:39.995 15:56:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:39.995 15:56:42 -- setup/common.sh@28 -- # mapfile -t mem 00:19:39.995 15:56:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.995 15:56:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8134900 kB' 'MemAvailable: 9500224 kB' 'Buffers: 2436 kB' 'Cached: 1580048 kB' 'SwapCached: 0 kB' 'Active: 451036 kB' 'Inactive: 1251176 kB' 'Active(anon): 130192 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121572 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133600 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72668 kB' 'KernelStack: 6352 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.995 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.995 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 
00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 
15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- 
# read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.996 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.996 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:39.996 15:56:42 -- setup/common.sh@33 -- # echo 1025 00:19:39.996 15:56:42 -- setup/common.sh@33 -- # return 0 00:19:39.996 15:56:42 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:19:39.996 15:56:42 -- setup/hugepages.sh@112 -- # get_nodes 00:19:39.996 15:56:42 -- setup/hugepages.sh@27 -- # local node 00:19:39.996 15:56:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:19:39.996 15:56:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:19:39.996 15:56:42 -- setup/hugepages.sh@32 -- # no_nodes=1 00:19:39.996 15:56:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:19:39.997 15:56:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:19:39.997 15:56:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:19:39.997 15:56:42 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:19:39.997 15:56:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:19:39.997 15:56:42 -- setup/common.sh@18 -- # local node=0 00:19:39.997 15:56:42 -- setup/common.sh@19 -- # local var val 00:19:39.997 15:56:42 -- setup/common.sh@20 -- # local mem_f mem 00:19:39.997 15:56:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:39.997 15:56:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:19:39.997 15:56:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:19:39.997 15:56:42 -- setup/common.sh@28 -- # mapfile -t mem 00:19:39.997 15:56:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8134900 kB' 'MemUsed: 4107064 kB' 'SwapCached: 0 kB' 'Active: 451228 kB' 'Inactive: 1251176 kB' 'Active(anon): 130384 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1582484 kB' 'Mapped: 48816 kB' 'AnonPages: 121536 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60932 kB' 'Slab: 133604 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 
00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 
15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.997 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.997 15:56:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.998 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.998 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.998 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.998 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.998 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.998 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.998 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.998 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.998 15:56:42 -- setup/common.sh@32 -- # continue 00:19:39.998 15:56:42 -- setup/common.sh@31 -- # IFS=': ' 00:19:39.998 15:56:42 -- setup/common.sh@31 -- # read -r var val _ 00:19:39.998 15:56:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:39.998 15:56:42 -- setup/common.sh@33 -- # echo 0 00:19:39.998 15:56:42 -- setup/common.sh@33 -- # return 0 00:19:39.998 node0=1025 expecting 1025 00:19:39.998 15:56:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:19:39.998 15:56:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:19:39.998 15:56:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:19:39.998 15:56:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:19:39.998 15:56:42 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:19:39.998 15:56:42 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:19:39.998 00:19:39.998 real 0m0.548s 00:19:39.998 user 0m0.262s 00:19:39.998 sys 0m0.294s 00:19:39.998 15:56:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:39.998 ************************************ 00:19:39.998 END TEST odd_alloc 00:19:39.998 ************************************ 00:19:39.998 15:56:42 -- common/autotest_common.sh@10 -- # set +x 00:19:39.998 15:56:42 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:19:39.998 15:56:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:39.998 15:56:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:39.998 15:56:42 -- common/autotest_common.sh@10 -- # set +x 00:19:39.998 ************************************ 00:19:39.998 START TEST custom_alloc 00:19:39.998 ************************************ 00:19:39.998 15:56:42 -- common/autotest_common.sh@1104 -- # custom_alloc 00:19:39.998 15:56:42 -- setup/hugepages.sh@167 -- # local IFS=, 00:19:39.998 15:56:42 -- setup/hugepages.sh@169 -- # local node 00:19:39.998 15:56:42 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:19:39.998 15:56:42 -- setup/hugepages.sh@170 -- # local nodes_hp 00:19:39.998 15:56:42 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:19:39.998 15:56:42 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:19:39.998 15:56:42 -- setup/hugepages.sh@49 -- # local size=1048576 00:19:39.998 15:56:42 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:19:39.998 15:56:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:19:39.998 15:56:42 -- setup/hugepages.sh@57 -- # nr_hugepages=512 
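The odd_alloc trace above closes by calling get_meminfo once more and comparing the result against the expected count (node0=1025 expecting 1025), and custom_alloc then derives its target from the requested size and the hugepage size reported in the snapshots: 1048576 kB / 2048 kB per page = 512 pages, matching nr_hugepages=512 in the trace. A minimal standalone sketch of that lookup-and-arithmetic pattern follows; it assumes the same 2048 kB Hugepagesize shown above, and the function name meminfo_value is illustrative only, not the setup/common.sh implementation.

  #!/usr/bin/env bash
  # Sketch only: print the value of one /proc/meminfo key
  # (e.g. HugePages_Total, HugePages_Surp, MemFree).
  meminfo_value() {
      local key=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$key" ]]; then
              echo "$val"        # numeric value; a trailing "kB" unit lands in $_
              return 0
          fi
      done </proc/meminfo
      return 1
  }

  # The page count settled on above follows from plain arithmetic:
  # requested kB divided by the reported Hugepagesize (2048 kB) -> 512.
  pages=$(( 1048576 / $(meminfo_value Hugepagesize) ))
  echo "HugePages_Total now: $(meminfo_value HugePages_Total), want: $pages"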
00:19:39.998 15:56:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:19:39.998 15:56:42 -- setup/hugepages.sh@62 -- # user_nodes=() 00:19:39.998 15:56:42 -- setup/hugepages.sh@62 -- # local user_nodes 00:19:39.998 15:56:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:19:39.998 15:56:42 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:19:39.998 15:56:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:19:39.998 15:56:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:19:39.998 15:56:42 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:19:39.998 15:56:42 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:19:39.998 15:56:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:19:39.998 15:56:42 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:19:39.998 15:56:42 -- setup/hugepages.sh@83 -- # : 0 00:19:39.998 15:56:42 -- setup/hugepages.sh@84 -- # : 0 00:19:39.998 15:56:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:19:39.998 15:56:42 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:19:39.998 15:56:42 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:19:39.998 15:56:42 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:19:39.998 15:56:42 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:19:39.998 15:56:42 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:19:39.998 15:56:42 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:19:39.998 15:56:42 -- setup/hugepages.sh@62 -- # user_nodes=() 00:19:39.998 15:56:42 -- setup/hugepages.sh@62 -- # local user_nodes 00:19:39.998 15:56:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:19:39.998 15:56:42 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:19:39.998 15:56:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:19:39.998 15:56:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:19:39.998 15:56:42 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:19:39.998 15:56:42 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:19:39.998 15:56:42 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:19:39.998 15:56:42 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:19:39.998 15:56:42 -- setup/hugepages.sh@78 -- # return 0 00:19:39.998 15:56:42 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:19:39.998 15:56:42 -- setup/hugepages.sh@187 -- # setup output 00:19:39.998 15:56:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:19:39.998 15:56:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:40.578 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:40.578 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:40.578 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:40.578 15:56:43 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:19:40.578 15:56:43 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:19:40.578 15:56:43 -- setup/hugepages.sh@89 -- # local node 00:19:40.578 15:56:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:19:40.578 15:56:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:19:40.578 15:56:43 -- setup/hugepages.sh@92 -- # local surp 00:19:40.578 15:56:43 -- setup/hugepages.sh@93 -- # local resv 00:19:40.578 15:56:43 -- setup/hugepages.sh@94 -- # local anon 00:19:40.578 15:56:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:19:40.578 15:56:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:19:40.578 
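verify_nr_hugepages begins here by testing the transparent-hugepage switch before it reads AnonHugePages: the hugepages.sh@96 entry above compares the contents of the THP "enabled" file ("always [madvise] never", i.e. madvise is the active mode) against the pattern *\[never\]*, and only then asks get_meminfo for AnonHugePages. A hedged sketch of that guard, assuming the standard sysfs path and reusing the illustrative meminfo_value helper from the earlier sketch:

  # Sketch only: count THP-backed anonymous memory unless THP is switched off.
  thp=/sys/kernel/mm/transparent_hugepage/enabled
  anon=0
  # The file reads e.g. "always [madvise] never"; brackets mark the active mode.
  if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
      anon=$(meminfo_value AnonHugePages)   # kB, 0 in the snapshots above
  fi
  echo "anon_hugepages=$anon"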
15:56:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:19:40.578 15:56:43 -- setup/common.sh@18 -- # local node= 00:19:40.578 15:56:43 -- setup/common.sh@19 -- # local var val 00:19:40.578 15:56:43 -- setup/common.sh@20 -- # local mem_f mem 00:19:40.578 15:56:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:40.578 15:56:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:40.578 15:56:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:40.578 15:56:43 -- setup/common.sh@28 -- # mapfile -t mem 00:19:40.578 15:56:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.578 15:56:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9186000 kB' 'MemAvailable: 10551328 kB' 'Buffers: 2436 kB' 'Cached: 1580052 kB' 'SwapCached: 0 kB' 'Active: 451568 kB' 'Inactive: 1251180 kB' 'Active(anon): 130724 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121832 kB' 'Mapped: 49032 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133632 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72700 kB' 'KernelStack: 6376 kB' 'PageTables: 4568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 356492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.578 
15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.578 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.578 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- 
setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:40.579 15:56:43 -- setup/common.sh@33 -- # echo 0 00:19:40.579 15:56:43 -- setup/common.sh@33 -- # return 0 00:19:40.579 15:56:43 -- setup/hugepages.sh@97 -- # anon=0 00:19:40.579 15:56:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:19:40.579 15:56:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:19:40.579 15:56:43 -- setup/common.sh@18 -- # local node= 00:19:40.579 15:56:43 -- setup/common.sh@19 -- # local var val 00:19:40.579 15:56:43 -- setup/common.sh@20 -- # local mem_f mem 00:19:40.579 15:56:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:40.579 15:56:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:40.579 15:56:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:40.579 15:56:43 -- setup/common.sh@28 -- # mapfile -t mem 00:19:40.579 15:56:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9186000 kB' 'MemAvailable: 10551328 kB' 'Buffers: 2436 kB' 'Cached: 1580052 kB' 'SwapCached: 0 kB' 'Active: 450944 kB' 'Inactive: 1251180 kB' 'Active(anon): 130100 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121588 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133652 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72720 kB' 'KernelStack: 6352 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.579 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.579 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 
00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ AnonPages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.580 
15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.580 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.580 15:56:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.581 15:56:43 -- setup/common.sh@33 -- # echo 0 00:19:40.581 15:56:43 -- setup/common.sh@33 -- # return 0 00:19:40.581 15:56:43 -- setup/hugepages.sh@99 -- # surp=0 00:19:40.581 15:56:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:19:40.581 15:56:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:19:40.581 15:56:43 -- setup/common.sh@18 -- # local node= 00:19:40.581 15:56:43 -- setup/common.sh@19 -- # local var val 00:19:40.581 15:56:43 -- setup/common.sh@20 -- # local mem_f mem 00:19:40.581 15:56:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:40.581 15:56:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:40.581 15:56:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:40.581 15:56:43 -- setup/common.sh@28 -- # mapfile -t mem 00:19:40.581 15:56:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9186000 kB' 'MemAvailable: 10551328 kB' 'Buffers: 2436 kB' 'Cached: 1580052 kB' 'SwapCached: 0 kB' 'Active: 450956 kB' 'Inactive: 1251180 kB' 'Active(anon): 130112 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121596 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133644 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72712 kB' 'KernelStack: 6352 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 
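The long runs of "continue" entries in this trace are setup/common.sh's get_meminfo helper walking every key of /proc/meminfo (or of a per-node meminfo file) until it reaches the key it was asked for, then echoing only that value. A minimal re-sketch of that lookup pattern, under the assumption that a plain while/read loop is close enough to the real common.sh code:

get_meminfo_sketch() {
    # Hypothetical helper, not the actual setup/common.sh implementation.
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    # With a node id, read the per-node file instead; its lines carry a "Node N " prefix.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS= read -r line; do
        line=${line#Node $node }                  # strip the prefix when present
        IFS=': ' read -r var val _ <<<"$line"     # e.g. var=HugePages_Rsvd val=0
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

get_meminfo_sketch HugePages_Rsvd      # system-wide reserved huge pages
get_meminfo_sketch HugePages_Surp 0    # surplus huge pages on node 0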
00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # 
IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.581 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.581 15:56:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 
-- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:40.582 15:56:43 -- setup/common.sh@33 -- # echo 0 00:19:40.582 15:56:43 -- setup/common.sh@33 -- # return 0 00:19:40.582 nr_hugepages=512 00:19:40.582 resv_hugepages=0 00:19:40.582 surplus_hugepages=0 00:19:40.582 15:56:43 -- setup/hugepages.sh@100 -- # resv=0 00:19:40.582 15:56:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:19:40.582 15:56:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:19:40.582 15:56:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:19:40.582 anon_hugepages=0 00:19:40.582 15:56:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:19:40.582 15:56:43 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:19:40.582 15:56:43 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:19:40.582 15:56:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:19:40.582 15:56:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:19:40.582 15:56:43 -- setup/common.sh@18 -- # local node= 00:19:40.582 15:56:43 -- setup/common.sh@19 -- # local var val 00:19:40.582 15:56:43 -- setup/common.sh@20 -- # local mem_f mem 00:19:40.582 15:56:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:40.582 15:56:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:40.582 15:56:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:40.582 15:56:43 -- setup/common.sh@28 -- # mapfile -t mem 00:19:40.582 15:56:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9186000 kB' 'MemAvailable: 10551328 kB' 'Buffers: 2436 kB' 'Cached: 1580052 kB' 'SwapCached: 0 kB' 'Active: 450948 kB' 'Inactive: 1251180 kB' 'Active(anon): 130104 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121332 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133632 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72700 kB' 'KernelStack: 6336 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 
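By this point the scans above have produced surp=0 and resv=0, and the echoes just before this loop report nr_hugepages=512 with no surplus, reserved or anonymous huge pages, so the pool check reduces to 512 == 512 + 0 + 0. A hedged sketch of that accounting step, with the literal 512 standing in for the requested pool size exactly as it appears in the expanded trace:

nr_hugepages=512   # HugePages_Total reported by /proc/meminfo
surp=0             # HugePages_Surp
resv=0             # HugePages_Rsvd

# Every configured page must be accounted for, and none may be surplus or reserved.
if (( 512 == nr_hugepages + surp + resv )) && (( 512 == nr_hugepages )); then
    echo 'hugepage pool matches the requested 512 pages'
else
    echo 'unexpected hugepage accounting' >&2
fi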
00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.582 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.582 15:56:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
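Once the system-wide totals check out, the entries that follow switch to per-node accounting: get_nodes enumerates /sys/devices/system/node/node*, records the expected page count for each node, and the run then prints "node0=512 expecting 512" for the single node present here. A rough sketch of that per-node pass, reading the totals straight from the per-node meminfo files rather than tracking them the way hugepages.sh does:

shopt -s extglob nullglob
declare -A nodes_test

# One entry per NUMA node; this run has only node0.
for node_dir in /sys/devices/system/node/node+([0-9]); do
    node=${node_dir##*node}
    # Per-node meminfo lines look like "Node 0 HugePages_Total: 512".
    nodes_test[$node]=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
done

for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[$node]} expecting 512"
done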
00:19:40.583 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.583 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.583 15:56:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:40.584 15:56:43 -- setup/common.sh@33 -- # echo 512 00:19:40.584 15:56:43 -- setup/common.sh@33 -- # return 0 00:19:40.584 15:56:43 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:19:40.584 15:56:43 -- setup/hugepages.sh@112 -- # get_nodes 00:19:40.584 15:56:43 -- setup/hugepages.sh@27 -- # local node 00:19:40.584 15:56:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:19:40.584 15:56:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:19:40.584 15:56:43 -- setup/hugepages.sh@32 -- # no_nodes=1 00:19:40.584 15:56:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:19:40.584 15:56:43 -- setup/hugepages.sh@115 
-- # for node in "${!nodes_test[@]}" 00:19:40.584 15:56:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:19:40.584 15:56:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:19:40.584 15:56:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:19:40.584 15:56:43 -- setup/common.sh@18 -- # local node=0 00:19:40.584 15:56:43 -- setup/common.sh@19 -- # local var val 00:19:40.584 15:56:43 -- setup/common.sh@20 -- # local mem_f mem 00:19:40.584 15:56:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:40.584 15:56:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:19:40.584 15:56:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:19:40.584 15:56:43 -- setup/common.sh@28 -- # mapfile -t mem 00:19:40.584 15:56:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9186000 kB' 'MemUsed: 3055964 kB' 'SwapCached: 0 kB' 'Active: 451008 kB' 'Inactive: 1251180 kB' 'Active(anon): 130164 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1582488 kB' 'Mapped: 48816 kB' 'AnonPages: 121336 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60932 kB' 'Slab: 133624 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 
15:56:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 
-- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.584 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.584 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # continue 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:40.585 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:40.585 15:56:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:40.585 15:56:43 -- setup/common.sh@33 -- # echo 0 00:19:40.585 15:56:43 -- setup/common.sh@33 -- # return 0 00:19:40.585 node0=512 expecting 512 00:19:40.585 ************************************ 00:19:40.585 END TEST custom_alloc 00:19:40.585 ************************************ 00:19:40.585 15:56:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:19:40.585 15:56:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:19:40.585 15:56:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:19:40.585 15:56:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:19:40.585 15:56:43 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:19:40.585 15:56:43 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:19:40.585 00:19:40.585 real 0m0.558s 00:19:40.585 user 0m0.283s 00:19:40.585 sys 0m0.271s 00:19:40.585 15:56:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:40.585 15:56:43 -- common/autotest_common.sh@10 -- # set +x 00:19:40.585 15:56:43 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:19:40.585 15:56:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:40.585 15:56:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:40.585 15:56:43 -- common/autotest_common.sh@10 -- # set +x 00:19:40.585 ************************************ 00:19:40.585 START TEST no_shrink_alloc 00:19:40.585 ************************************ 00:19:40.585 15:56:43 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:19:40.585 15:56:43 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:19:40.585 15:56:43 -- setup/hugepages.sh@49 -- # local size=2097152 00:19:40.585 15:56:43 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:19:40.585 15:56:43 -- setup/hugepages.sh@51 -- # shift 00:19:40.585 15:56:43 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:19:40.585 15:56:43 -- setup/hugepages.sh@52 -- # local node_ids 00:19:40.585 15:56:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:19:40.585 15:56:43 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:19:40.585 15:56:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:19:40.585 15:56:43 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:19:40.585 15:56:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:19:40.585 15:56:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:19:40.585 15:56:43 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:19:40.585 15:56:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:19:40.585 15:56:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:19:40.585 15:56:43 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:19:40.585 15:56:43 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:19:40.585 15:56:43 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:19:40.585 15:56:43 -- setup/hugepages.sh@73 -- # return 0 00:19:40.585 15:56:43 -- setup/hugepages.sh@198 -- # setup output 00:19:40.585 15:56:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:19:40.585 15:56:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:41.155 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:41.155 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:41.155 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:41.155 15:56:43 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:19:41.155 15:56:43 -- setup/hugepages.sh@89 -- # local node 00:19:41.155 15:56:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:19:41.155 15:56:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:19:41.155 15:56:43 -- setup/hugepages.sh@92 -- # local surp 00:19:41.155 15:56:43 -- setup/hugepages.sh@93 -- # local resv 00:19:41.155 15:56:43 -- setup/hugepages.sh@94 -- # local anon 00:19:41.155 15:56:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:19:41.155 15:56:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:19:41.155 15:56:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:19:41.155 15:56:43 -- setup/common.sh@18 -- # local node= 00:19:41.155 15:56:43 -- setup/common.sh@19 -- # local var val 00:19:41.155 15:56:43 -- setup/common.sh@20 -- # local mem_f mem 00:19:41.155 15:56:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:41.155 15:56:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:41.155 15:56:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:41.155 15:56:43 -- setup/common.sh@28 -- # mapfile -t mem 00:19:41.155 15:56:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:41.155 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.155 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8130632 kB' 'MemAvailable: 9495960 kB' 'Buffers: 2436 kB' 'Cached: 1580052 kB' 'SwapCached: 0 kB' 'Active: 451308 kB' 'Inactive: 1251180 kB' 'Active(anon): 130464 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121852 kB' 'Mapped: 48916 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133632 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72700 kB' 'KernelStack: 6344 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354612 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val 
_ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 
15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # 
continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.156 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.156 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:19:41.157 15:56:43 -- setup/common.sh@33 -- # echo 0 00:19:41.157 15:56:43 -- setup/common.sh@33 -- # return 0 00:19:41.157 15:56:43 -- setup/hugepages.sh@97 -- # anon=0 00:19:41.157 15:56:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:19:41.157 15:56:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:19:41.157 15:56:43 -- setup/common.sh@18 -- # local node= 00:19:41.157 15:56:43 -- setup/common.sh@19 -- # local var val 00:19:41.157 15:56:43 -- setup/common.sh@20 -- # local mem_f mem 00:19:41.157 15:56:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:41.157 15:56:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:41.157 15:56:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:41.157 15:56:43 -- setup/common.sh@28 -- # mapfile -t mem 00:19:41.157 15:56:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8130888 kB' 'MemAvailable: 9496216 kB' 'Buffers: 2436 kB' 'Cached: 1580052 kB' 'SwapCached: 0 kB' 'Active: 450928 kB' 'Inactive: 1251180 kB' 'Active(anon): 130084 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121280 kB' 'Mapped: 48876 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133640 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72708 kB' 'KernelStack: 6320 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # 
continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # 
read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.157 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.157 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 
00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.158 15:56:43 -- setup/common.sh@33 -- # echo 0 00:19:41.158 15:56:43 -- setup/common.sh@33 -- # return 0 00:19:41.158 15:56:43 -- setup/hugepages.sh@99 -- # surp=0 00:19:41.158 15:56:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:19:41.158 15:56:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:19:41.158 15:56:43 -- setup/common.sh@18 -- # local node= 00:19:41.158 15:56:43 -- setup/common.sh@19 -- # local var val 00:19:41.158 15:56:43 -- setup/common.sh@20 -- # local mem_f mem 00:19:41.158 15:56:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:41.158 15:56:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:41.158 15:56:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:41.158 15:56:43 -- setup/common.sh@28 -- # mapfile -t mem 00:19:41.158 15:56:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8130888 kB' 'MemAvailable: 9496216 kB' 'Buffers: 2436 kB' 'Cached: 1580052 kB' 'SwapCached: 0 kB' 'Active: 450996 kB' 'Inactive: 1251180 kB' 'Active(anon): 130152 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121372 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133636 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72704 kB' 'KernelStack: 6352 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.158 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.158 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 
00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- 
setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 
00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.159 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.159 15:56:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:19:41.159 15:56:43 -- setup/common.sh@33 -- # echo 0 00:19:41.159 15:56:43 -- setup/common.sh@33 -- # return 0 00:19:41.160 15:56:43 -- setup/hugepages.sh@100 -- # resv=0 00:19:41.160 nr_hugepages=1024 00:19:41.160 15:56:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:19:41.160 resv_hugepages=0 00:19:41.160 15:56:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:19:41.160 surplus_hugepages=0 00:19:41.160 15:56:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:19:41.160 anon_hugepages=0 00:19:41.160 15:56:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:19:41.160 15:56:43 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:19:41.160 15:56:43 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:19:41.160 15:56:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:19:41.160 15:56:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:19:41.160 15:56:43 -- setup/common.sh@18 -- # local node= 00:19:41.160 15:56:43 -- setup/common.sh@19 -- # local var val 00:19:41.160 15:56:43 -- setup/common.sh@20 -- # local mem_f mem 00:19:41.160 15:56:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:19:41.160 15:56:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:19:41.160 15:56:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:19:41.160 15:56:43 -- setup/common.sh@28 -- # mapfile -t mem 00:19:41.160 15:56:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8131160 kB' 'MemAvailable: 9496488 kB' 'Buffers: 2436 kB' 'Cached: 1580052 kB' 'SwapCached: 0 kB' 'Active: 450944 kB' 'Inactive: 1251180 kB' 'Active(anon): 130100 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121320 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133636 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72704 kB' 'KernelStack: 6336 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- 
setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 
00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.160 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.160 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 
00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 
00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # continue 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.161 15:56:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.161 15:56:43 -- setup/common.sh@33 -- # echo 1024 00:19:41.161 15:56:43 -- setup/common.sh@33 -- # return 0 00:19:41.161 15:56:43 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:19:41.161 15:56:43 -- setup/hugepages.sh@112 -- # get_nodes 00:19:41.161 15:56:43 -- setup/hugepages.sh@27 -- # local node 00:19:41.161 15:56:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:19:41.161 15:56:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:19:41.161 15:56:43 -- setup/hugepages.sh@32 -- # no_nodes=1 00:19:41.161 15:56:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:19:41.161 15:56:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:19:41.161 15:56:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:19:41.161 15:56:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:19:41.161 15:56:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:19:41.161 15:56:43 -- setup/common.sh@18 -- # local node=0 00:19:41.161 15:56:43 -- setup/common.sh@19 -- # local var val 00:19:41.161 15:56:43 -- setup/common.sh@20 -- # local mem_f mem 00:19:41.161 15:56:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:41.161 15:56:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:19:41.161 15:56:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:19:41.161 15:56:43 -- setup/common.sh@28 -- # mapfile -t mem 00:19:41.161 15:56:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:41.161 15:56:43 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.161 15:56:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8131160 kB' 'MemUsed: 4110804 kB' 'SwapCached: 0 kB' 'Active: 450900 kB' 'Inactive: 1251180 kB' 'Active(anon): 130056 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1582488 kB' 'Mapped: 48816 kB' 'AnonPages: 121536 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60932 kB' 'Slab: 133636 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72704 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:19:41.161 15:56:43 -- setup/common.sh@31 -- # read -r var val _
[xtrace 00:19:41.161-00:19:41.162 15:56:43, setup/common.sh@31-32: the read/IFS=': ' loop walks the remaining meminfo fields (MemTotal, MemFree, MemUsed, SwapCached ... HugePages_Total, HugePages_Free) and hits "continue" for every key that is not HugePages_Surp]
00:19:41.162 15:56:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:19:41.162 15:56:43 -- setup/common.sh@33 -- # echo 0
00:19:41.162 15:56:43 -- setup/common.sh@33 -- # return 0
00:19:41.162 15:56:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:19:41.162 15:56:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:19:41.162 15:56:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:19:41.162 15:56:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:19:41.162 node0=1024 expecting 1024
15:56:43 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:19:41.162 15:56:43 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:19:41.162 15:56:43 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:19:41.162 15:56:43 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:19:41.162 15:56:43 -- setup/hugepages.sh@202 -- # setup output
00:19:41.162 15:56:43 -- setup/common.sh@9 -- # [[ output == output ]]
00:19:41.162 15:56:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:19:41.421 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:19:41.683 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:19:41.683 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:19:41.683 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:19:41.683 15:56:44 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
[xtrace 00:19:41.683 15:56:44, setup/hugepages.sh@89-94: local node / sorted_t / sorted_s / surp / resv / anon]
00:19:41.683 15:56:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:19:41.683 15:56:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[xtrace 00:19:41.683 15:56:44, setup/common.sh@17-31: local get=AnonHugePages, node=, mem_f=/proc/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), IFS=': ']
00:19:41.683 15:56:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8128604 kB' 'MemAvailable: 9493932 kB' 'Buffers: 2436 kB' 'Cached: 1580052 kB' 'SwapCached: 0 kB' 'Active: 451740 kB' 'Inactive: 1251180 kB' 'Active(anon): 130896 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122064 kB' 'Mapped: 49324 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133624 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72692 kB' 'KernelStack: 6372 kB' 'PageTables: 4552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB'
[xtrace 00:19:41.683-00:19:41.684 15:56:44, setup/common.sh@31-32: the read/IFS=': ' loop walks every /proc/meminfo field (MemTotal ... HardwareCorrupted) and hits "continue" for every key that is not AnonHugePages]
00:19:41.684 15:56:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:19:41.684 15:56:44 -- setup/common.sh@33 -- # echo 0
00:19:41.684 15:56:44 -- setup/common.sh@33 -- # return 0
00:19:41.684 15:56:44 -- setup/hugepages.sh@97 -- # anon=0
00:19:41.684 15:56:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace 00:19:41.684 15:56:44, setup/common.sh@17-31: local get=HugePages_Surp, node=, mem_f=/proc/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), IFS=': ']
00:19:41.684 15:56:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8131612 kB' 'MemAvailable: 9496940 kB' 'Buffers: 2436 kB' 'Cached: 1580052 kB' 'SwapCached: 0 kB' 'Active: 446296 kB' 'Inactive: 1251180 kB' 'Active(anon): 125452 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 116620 kB' 'Mapped: 48424 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133612 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72680 kB' 'KernelStack: 6340 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 334596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB'
[xtrace 00:19:41.684-00:19:41.686 15:56:44, setup/common.sh@31-32: the read/IFS=': ' loop walks every /proc/meminfo field (MemTotal ... HugePages_Rsvd) and hits "continue" for every key that is not HugePages_Surp]
00:19:41.686 15:56:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:19:41.686 15:56:44 -- setup/common.sh@33 -- # echo 0
00:19:41.686 15:56:44 -- setup/common.sh@33 -- # return 0
00:19:41.686 15:56:44 -- setup/hugepages.sh@99 -- # surp=0
00:19:41.686 15:56:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace 00:19:41.686 15:56:44, setup/common.sh@17-31: local get=HugePages_Rsvd, node=, mem_f=/proc/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), IFS=': ']
00:19:41.686 15:56:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8131360 kB' 'MemAvailable: 9496688 kB' 'Buffers: 2436 kB' 'Cached: 1580052 kB' 'SwapCached: 0 kB' 'Active: 445568 kB' 'Inactive: 1251180 kB' 'Active(anon): 124724 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 115856 kB' 'Mapped: 48232 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133592 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72660 kB' 'KernelStack: 6284 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 334596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB'
[xtrace 00:19:41.686-00:19:41.687 15:56:44, setup/common.sh@31-32: the read/IFS=': ' loop walks every /proc/meminfo field (MemTotal ... HugePages_Free) and hits "continue" for every key that is not HugePages_Rsvd]
00:19:41.687 15:56:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:19:41.687 15:56:44 -- setup/common.sh@33 -- # echo 0
00:19:41.687 15:56:44 -- setup/common.sh@33 -- # return 0
00:19:41.687 15:56:44 -- setup/hugepages.sh@100 -- # resv=0
00:19:41.687 nr_hugepages=1024
15:56:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
resv_hugepages=0
15:56:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
15:56:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
15:56:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:19:41.687 15:56:44 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:19:41.687 15:56:44 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:19:41.687 15:56:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace 00:19:41.687 15:56:44, setup/common.sh@17-31: local get=HugePages_Total, node=, mem_f=/proc/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), IFS=': ']
00:19:41.687 15:56:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8131716 kB' 'MemAvailable: 9497044 kB' 'Buffers: 2436 kB' 'Cached: 1580052 kB' 'SwapCached: 0 kB' 'Active: 445676 kB' 'Inactive: 1251180 kB' 'Active(anon): 124832 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 115980 kB' 'Mapped: 48172 kB' 'Shmem: 10464 kB' 'KReclaimable: 60932 kB' 'Slab: 133588 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72656 kB' 'KernelStack: 6284 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 334596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB'
[xtrace 00:19:41.688-00:19:41.689 15:56:44, setup/common.sh@31-32: the read/IFS=': ' loop walks every /proc/meminfo field (MemTotal ... FileHugePages) and hits "continue" for every key that is not HugePages_Total]
00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ FilePmdMapped ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:19:41.689 15:56:44 -- setup/common.sh@33 -- # echo 1024 00:19:41.689 15:56:44 -- setup/common.sh@33 -- # return 0 00:19:41.689 15:56:44 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:19:41.689 15:56:44 -- setup/hugepages.sh@112 -- # get_nodes 00:19:41.689 15:56:44 -- setup/hugepages.sh@27 -- # local node 00:19:41.689 15:56:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:19:41.689 15:56:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:19:41.689 15:56:44 -- setup/hugepages.sh@32 -- # no_nodes=1 00:19:41.689 15:56:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:19:41.689 15:56:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:19:41.689 15:56:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:19:41.689 15:56:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:19:41.689 15:56:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:19:41.689 15:56:44 -- setup/common.sh@18 -- # local node=0 00:19:41.689 15:56:44 -- setup/common.sh@19 -- # local var val 00:19:41.689 15:56:44 -- setup/common.sh@20 -- # local mem_f mem 00:19:41.689 15:56:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:19:41.689 15:56:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:19:41.689 15:56:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:19:41.689 15:56:44 -- setup/common.sh@28 -- # mapfile -t mem 00:19:41.689 15:56:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:19:41.689 15:56:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8131716 kB' 'MemUsed: 4110248 kB' 'SwapCached: 0 kB' 'Active: 445620 kB' 'Inactive: 1251180 kB' 'Active(anon): 124776 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1251180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1582488 kB' 'Mapped: 48172 kB' 'AnonPages: 115888 kB' 'Shmem: 10464 kB' 'KernelStack: 6252 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60932 kB' 'Slab: 133588 kB' 'SReclaimable: 60932 kB' 'SUnreclaim: 72656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var 
val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.689 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.689 15:56:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.690 15:56:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # continue 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # IFS=': ' 00:19:41.690 15:56:44 -- setup/common.sh@31 -- # read -r var val _ 00:19:41.690 15:56:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:19:41.690 15:56:44 -- setup/common.sh@33 -- # echo 0 00:19:41.690 15:56:44 -- setup/common.sh@33 -- # return 0 00:19:41.690 15:56:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:19:41.690 15:56:44 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:19:41.690 15:56:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:19:41.690 15:56:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:19:41.690 node0=1024 expecting 1024 00:19:41.690 15:56:44 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:19:41.690 15:56:44 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:19:41.690 00:19:41.690 real 0m1.045s 00:19:41.690 user 0m0.545s 00:19:41.690 sys 0m0.564s 00:19:41.690 15:56:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.690 15:56:44 -- common/autotest_common.sh@10 -- # set +x 00:19:41.690 ************************************ 00:19:41.690 END TEST no_shrink_alloc 00:19:41.690 ************************************ 00:19:41.690 15:56:44 -- setup/hugepages.sh@217 -- # clear_hp 00:19:41.690 15:56:44 -- setup/hugepages.sh@37 -- # local node hp 00:19:41.690 15:56:44 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:19:41.690 15:56:44 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:19:41.690 15:56:44 -- setup/hugepages.sh@41 -- # echo 0 00:19:41.690 15:56:44 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:19:41.690 15:56:44 -- setup/hugepages.sh@41 -- # echo 0 00:19:41.690 15:56:44 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:19:41.690 15:56:44 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:19:41.690 00:19:41.690 real 0m4.562s 00:19:41.690 user 0m2.224s 00:19:41.690 sys 0m2.378s 00:19:41.690 15:56:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.690 15:56:44 -- common/autotest_common.sh@10 -- # set +x 00:19:41.690 ************************************ 00:19:41.690 END TEST hugepages 00:19:41.690 ************************************ 00:19:41.949 15:56:44 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:19:41.949 15:56:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:41.949 15:56:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:41.949 15:56:44 -- common/autotest_common.sh@10 -- # set +x 00:19:41.949 ************************************ 00:19:41.949 START TEST driver 00:19:41.949 ************************************ 00:19:41.949 15:56:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:19:41.949 * Looking for test storage... 
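Editor's note: the xtrace above is setup/common.sh's get_meminfo helper walking every field of node0's meminfo file until it reaches HugePages_Total (1024) and HugePages_Surp (0), after which hugepages.sh checks that the total matches the expected count. A minimal standalone sketch of the same idea follows; get_meminfo_field, its node argument, and the simplified final check are illustrative assumptions, not the SPDK helpers themselves.

    #!/usr/bin/env bash
    # Sketch only: return the value of one meminfo field, optionally from a
    # single NUMA node's meminfo (as the trace above does for node0).
    get_meminfo_field() {
        local field=$1 node=${2:-}
        local file=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every line with "Node <n>", so strip that first.
        sed "s/^Node $node //" "$file" | awk -v f="$field:" '$1 == f {print $2; exit}'
    }

    # Mirrors the spirit of the check in the trace: configured hugepages must
    # account for the node's total (surplus pages included).
    nr_hugepages=1024
    total=$(get_meminfo_field HugePages_Total 0)
    surp=$(get_meminfo_field HugePages_Surp 0)
    (( total == nr_hugepages + surp )) && echo "node0=$total expecting $nr_hugepages"
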
00:19:41.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:19:41.949 15:56:44 -- setup/driver.sh@68 -- # setup reset 00:19:41.949 15:56:44 -- setup/common.sh@9 -- # [[ reset == output ]] 00:19:41.949 15:56:44 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:42.524 15:56:45 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:19:42.524 15:56:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:42.524 15:56:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:42.524 15:56:45 -- common/autotest_common.sh@10 -- # set +x 00:19:42.524 ************************************ 00:19:42.524 START TEST guess_driver 00:19:42.524 ************************************ 00:19:42.524 15:56:45 -- common/autotest_common.sh@1104 -- # guess_driver 00:19:42.524 15:56:45 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:19:42.524 15:56:45 -- setup/driver.sh@47 -- # local fail=0 00:19:42.524 15:56:45 -- setup/driver.sh@49 -- # pick_driver 00:19:42.524 15:56:45 -- setup/driver.sh@36 -- # vfio 00:19:42.524 15:56:45 -- setup/driver.sh@21 -- # local iommu_grups 00:19:42.524 15:56:45 -- setup/driver.sh@22 -- # local unsafe_vfio 00:19:42.524 15:56:45 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:19:42.524 15:56:45 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:19:42.524 15:56:45 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:19:42.524 15:56:45 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:19:42.524 15:56:45 -- setup/driver.sh@32 -- # return 1 00:19:42.524 15:56:45 -- setup/driver.sh@38 -- # uio 00:19:42.524 15:56:45 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:19:42.524 15:56:45 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:19:42.524 15:56:45 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:19:42.524 15:56:45 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:19:42.524 15:56:45 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:19:42.524 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:19:42.524 15:56:45 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:19:42.524 15:56:45 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:19:42.524 15:56:45 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:19:42.524 15:56:45 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:19:42.524 Looking for driver=uio_pci_generic 00:19:42.524 15:56:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:19:42.524 15:56:45 -- setup/driver.sh@45 -- # setup output config 00:19:42.524 15:56:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:19:42.524 15:56:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:19:43.090 15:56:45 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:19:43.090 15:56:45 -- setup/driver.sh@58 -- # continue 00:19:43.090 15:56:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:19:43.090 15:56:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:19:43.090 15:56:45 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:19:43.090 15:56:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:19:43.349 15:56:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:19:43.349 15:56:46 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:19:43.349 15:56:46 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:19:43.349 15:56:46 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:19:43.349 15:56:46 -- setup/driver.sh@65 -- # setup reset 00:19:43.349 15:56:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:19:43.349 15:56:46 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:43.915 00:19:43.915 real 0m1.425s 00:19:43.915 user 0m0.555s 00:19:43.915 sys 0m0.866s 00:19:43.915 15:56:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:43.915 15:56:46 -- common/autotest_common.sh@10 -- # set +x 00:19:43.915 ************************************ 00:19:43.915 END TEST guess_driver 00:19:43.915 ************************************ 00:19:43.915 00:19:43.915 real 0m2.084s 00:19:43.915 user 0m0.774s 00:19:43.915 sys 0m1.350s 00:19:43.915 15:56:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:43.915 ************************************ 00:19:43.915 END TEST driver 00:19:43.915 15:56:46 -- common/autotest_common.sh@10 -- # set +x 00:19:43.915 ************************************ 00:19:43.915 15:56:46 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:19:43.915 15:56:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:43.915 15:56:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:43.915 15:56:46 -- common/autotest_common.sh@10 -- # set +x 00:19:43.915 ************************************ 00:19:43.915 START TEST devices 00:19:43.915 ************************************ 00:19:43.915 15:56:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:19:43.915 * Looking for test storage... 00:19:43.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:19:43.915 15:56:46 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:19:43.915 15:56:46 -- setup/devices.sh@192 -- # setup reset 00:19:43.916 15:56:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:19:43.916 15:56:46 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:44.851 15:56:47 -- setup/devices.sh@194 -- # get_zoned_devs 00:19:44.851 15:56:47 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:19:44.851 15:56:47 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:19:44.851 15:56:47 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:19:44.851 15:56:47 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:19:44.851 15:56:47 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:19:44.851 15:56:47 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:19:44.851 15:56:47 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:44.851 15:56:47 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:19:44.851 15:56:47 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:19:44.851 15:56:47 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:19:44.851 15:56:47 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:19:44.851 15:56:47 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:44.851 15:56:47 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:19:44.851 15:56:47 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:19:44.851 15:56:47 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:19:44.851 15:56:47 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:19:44.851 15:56:47 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:19:44.851 15:56:47 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:19:44.851 15:56:47 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:19:44.851 15:56:47 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:19:44.851 15:56:47 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:19:44.851 15:56:47 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:19:44.851 15:56:47 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:19:44.851 15:56:47 -- setup/devices.sh@196 -- # blocks=() 00:19:44.851 15:56:47 -- setup/devices.sh@196 -- # declare -a blocks 00:19:44.851 15:56:47 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:19:44.851 15:56:47 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:19:44.851 15:56:47 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:19:44.851 15:56:47 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:19:44.851 15:56:47 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:19:44.851 15:56:47 -- setup/devices.sh@201 -- # ctrl=nvme0 00:19:44.851 15:56:47 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:19:44.851 15:56:47 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:19:44.851 15:56:47 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:19:44.851 15:56:47 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:19:44.851 15:56:47 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:44.851 No valid GPT data, bailing 00:19:44.851 15:56:47 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:44.851 15:56:47 -- scripts/common.sh@393 -- # pt= 00:19:44.851 15:56:47 -- scripts/common.sh@394 -- # return 1 00:19:44.851 15:56:47 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:19:44.851 15:56:47 -- setup/common.sh@76 -- # local dev=nvme0n1 00:19:44.851 15:56:47 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:44.851 15:56:47 -- setup/common.sh@80 -- # echo 5368709120 00:19:44.851 15:56:47 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:19:44.851 15:56:47 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:19:44.851 15:56:47 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:19:44.851 15:56:47 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:19:44.851 15:56:47 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:19:44.851 15:56:47 -- setup/devices.sh@201 -- # ctrl=nvme1 00:19:44.851 15:56:47 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:19:44.851 15:56:47 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:19:44.851 15:56:47 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:19:44.851 15:56:47 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:19:44.851 15:56:47 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:44.851 No valid GPT data, bailing 00:19:44.851 15:56:47 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:44.851 15:56:47 -- scripts/common.sh@393 -- # pt= 00:19:44.851 15:56:47 -- scripts/common.sh@394 -- # return 1 00:19:44.851 15:56:47 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:19:44.851 15:56:47 -- setup/common.sh@76 -- # local dev=nvme1n1 00:19:44.851 15:56:47 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:44.851 15:56:47 -- setup/common.sh@80 -- # echo 4294967296 00:19:44.851 15:56:47 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:19:44.851 15:56:47 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:19:44.851 15:56:47 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:19:44.851 15:56:47 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:19:44.851 15:56:47 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:19:44.851 15:56:47 -- setup/devices.sh@201 -- # ctrl=nvme1 00:19:44.851 15:56:47 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:19:44.851 15:56:47 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:19:44.851 15:56:47 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:19:44.851 15:56:47 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:19:44.851 15:56:47 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:19:44.851 No valid GPT data, bailing 00:19:44.851 15:56:47 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:19:44.851 15:56:47 -- scripts/common.sh@393 -- # pt= 00:19:44.851 15:56:47 -- scripts/common.sh@394 -- # return 1 00:19:44.851 15:56:47 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:19:44.851 15:56:47 -- setup/common.sh@76 -- # local dev=nvme1n2 00:19:44.851 15:56:47 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:19:44.851 15:56:47 -- setup/common.sh@80 -- # echo 4294967296 00:19:44.851 15:56:47 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:19:44.851 15:56:47 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:19:44.851 15:56:47 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:19:44.851 15:56:47 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:19:44.851 15:56:47 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:19:44.851 15:56:47 -- setup/devices.sh@201 -- # ctrl=nvme1 00:19:44.851 15:56:47 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:19:44.851 15:56:47 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:19:44.851 15:56:47 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:19:44.851 15:56:47 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:19:44.851 15:56:47 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:19:45.109 No valid GPT data, bailing 00:19:45.109 15:56:47 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:19:45.109 15:56:47 -- scripts/common.sh@393 -- # pt= 00:19:45.109 15:56:47 -- scripts/common.sh@394 -- # return 1 00:19:45.109 15:56:47 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:19:45.109 15:56:47 -- setup/common.sh@76 -- # local dev=nvme1n3 00:19:45.109 15:56:47 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:19:45.109 15:56:47 -- setup/common.sh@80 -- # echo 4294967296 00:19:45.109 15:56:47 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:19:45.109 15:56:47 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:19:45.109 15:56:47 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:19:45.109 15:56:47 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:19:45.109 15:56:47 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:19:45.109 15:56:47 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:19:45.109 15:56:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:45.109 15:56:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:45.109 15:56:47 -- common/autotest_common.sh@10 -- # set +x 00:19:45.109 
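Editor's note: the loop traced above selects the test disks: zoned namespaces are skipped, spdk-gpt.py and blkid must report no existing partition table ("No valid GPT data, bailing"), and the device size must reach min_disk_size (3221225472 bytes, i.e. 3 GiB). A rough standalone approximation using only stock tools is sketched below; blkid stands in for spdk-gpt.py and the sysfs sector count replaces sec_size_to_bytes, so this is not the SPDK script itself.

    #!/usr/bin/env bash
    # Sketch: keep non-zoned, unpartitioned NVMe namespaces of at least 3 GiB.
    min_disk_size=$((3 * 1024 * 1024 * 1024))
    candidates=()

    for sysdev in /sys/block/nvme*n*; do
        [[ -e $sysdev ]] || continue
        name=${sysdev##*/}
        # Skip multipath controller nodes such as nvme0c0n1.
        [[ $name == *c* ]] && continue
        dev=/dev/$name
        # Only conventional (non-zoned) block devices are usable here.
        [[ $(cat "$sysdev/queue/zoned" 2>/dev/null) == none ]] || continue
        # Skip devices that already carry a partition table.
        [[ -z $(blkid -s PTTYPE -o value "$dev" 2>/dev/null) ]] || continue
        # /sys/block/*/size counts 512-byte sectors.
        size=$(( $(cat "$sysdev/size") * 512 ))
        (( size >= min_disk_size )) || continue
        candidates+=("$dev")
    done

    (( ${#candidates[@]} )) && printf 'usable test disk: %s\n' "${candidates[@]}"
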
************************************ 00:19:45.109 START TEST nvme_mount 00:19:45.109 ************************************ 00:19:45.109 15:56:47 -- common/autotest_common.sh@1104 -- # nvme_mount 00:19:45.109 15:56:47 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:19:45.109 15:56:47 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:19:45.109 15:56:47 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:45.109 15:56:47 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:19:45.110 15:56:47 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:19:45.110 15:56:47 -- setup/common.sh@39 -- # local disk=nvme0n1 00:19:45.110 15:56:47 -- setup/common.sh@40 -- # local part_no=1 00:19:45.110 15:56:47 -- setup/common.sh@41 -- # local size=1073741824 00:19:45.110 15:56:47 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:19:45.110 15:56:47 -- setup/common.sh@44 -- # parts=() 00:19:45.110 15:56:47 -- setup/common.sh@44 -- # local parts 00:19:45.110 15:56:47 -- setup/common.sh@46 -- # (( part = 1 )) 00:19:45.110 15:56:47 -- setup/common.sh@46 -- # (( part <= part_no )) 00:19:45.110 15:56:47 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:19:45.110 15:56:47 -- setup/common.sh@46 -- # (( part++ )) 00:19:45.110 15:56:47 -- setup/common.sh@46 -- # (( part <= part_no )) 00:19:45.110 15:56:47 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:19:45.110 15:56:47 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:19:45.110 15:56:47 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:19:46.043 Creating new GPT entries in memory. 00:19:46.043 GPT data structures destroyed! You may now partition the disk using fdisk or 00:19:46.043 other utilities. 00:19:46.043 15:56:48 -- setup/common.sh@57 -- # (( part = 1 )) 00:19:46.043 15:56:48 -- setup/common.sh@57 -- # (( part <= part_no )) 00:19:46.043 15:56:48 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:19:46.043 15:56:48 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:19:46.043 15:56:48 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:19:46.978 Creating new GPT entries in memory. 00:19:46.978 The operation has completed successfully. 
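Editor's note: the "Creating new GPT entries in memory" / "The operation has completed successfully." messages above come from sgdisk: the disk is zapped, then the partition is created while holding flock on the device node, with scripts/sync_dev_uevents.sh waiting for the partition uevent in the background. A hedged sketch of the same sequence is shown below; udevadm settle is used here as a stand-in for the uevent-sync helper, and the sector range is copied from the trace (sgdisk counts in the disk's logical sectors).

    #!/usr/bin/env bash
    # Sketch: wipe the GPT, create one partition under an exclusive lock, then
    # wait for the kernel/udev to publish the new partition node.
    disk=/dev/nvme0n1
    part=${disk}p1

    sgdisk "$disk" --zap-all
    # Sector range taken from the trace above (--new=partnum:start:end).
    flock "$disk" sgdisk "$disk" --new=1:2048:264191

    # The real test delegates this wait to scripts/sync_dev_uevents.sh.
    udevadm settle
    [[ -b $part ]] && echo "partition $part is ready"
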
00:19:46.978 15:56:49 -- setup/common.sh@57 -- # (( part++ )) 00:19:46.978 15:56:49 -- setup/common.sh@57 -- # (( part <= part_no )) 00:19:46.978 15:56:49 -- setup/common.sh@62 -- # wait 52081 00:19:46.978 15:56:49 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:46.979 15:56:49 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:19:46.979 15:56:49 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:46.979 15:56:49 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:19:46.979 15:56:49 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:19:47.238 15:56:49 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:47.238 15:56:49 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:19:47.238 15:56:49 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:19:47.238 15:56:49 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:19:47.238 15:56:49 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:47.238 15:56:49 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:19:47.238 15:56:49 -- setup/devices.sh@53 -- # local found=0 00:19:47.238 15:56:49 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:19:47.238 15:56:49 -- setup/devices.sh@56 -- # : 00:19:47.238 15:56:49 -- setup/devices.sh@59 -- # local pci status 00:19:47.238 15:56:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:47.238 15:56:49 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:19:47.238 15:56:49 -- setup/devices.sh@47 -- # setup output config 00:19:47.238 15:56:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:19:47.238 15:56:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:19:47.238 15:56:50 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:19:47.238 15:56:50 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:19:47.238 15:56:50 -- setup/devices.sh@63 -- # found=1 00:19:47.238 15:56:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:47.238 15:56:50 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:19:47.238 15:56:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:47.500 15:56:50 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:19:47.500 15:56:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:47.757 15:56:50 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:19:47.757 15:56:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:47.757 15:56:50 -- setup/devices.sh@66 -- # (( found == 1 )) 00:19:47.757 15:56:50 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:19:47.757 15:56:50 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:47.757 15:56:50 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:19:47.757 15:56:50 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:19:47.757 15:56:50 -- setup/devices.sh@110 -- # cleanup_nvme 00:19:47.757 15:56:50 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:47.757 15:56:50 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:47.757 15:56:50 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:19:47.757 15:56:50 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:19:47.757 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:19:47.757 15:56:50 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:19:47.757 15:56:50 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:19:48.015 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:19:48.015 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:19:48.015 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:19:48.015 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:19:48.015 15:56:50 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:19:48.015 15:56:50 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:19:48.015 15:56:50 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:48.015 15:56:50 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:19:48.015 15:56:50 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:19:48.015 15:56:50 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:48.015 15:56:50 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:19:48.015 15:56:50 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:19:48.015 15:56:50 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:19:48.015 15:56:50 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:48.015 15:56:50 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:19:48.015 15:56:50 -- setup/devices.sh@53 -- # local found=0 00:19:48.015 15:56:50 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:19:48.015 15:56:50 -- setup/devices.sh@56 -- # : 00:19:48.015 15:56:50 -- setup/devices.sh@59 -- # local pci status 00:19:48.015 15:56:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:48.015 15:56:50 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:19:48.015 15:56:50 -- setup/devices.sh@47 -- # setup output config 00:19:48.016 15:56:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:19:48.016 15:56:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:19:48.273 15:56:50 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:19:48.273 15:56:50 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:19:48.273 15:56:50 -- setup/devices.sh@63 -- # found=1 00:19:48.273 15:56:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:48.273 15:56:50 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:19:48.273 
15:56:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:48.531 15:56:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:19:48.531 15:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:48.531 15:56:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:19:48.531 15:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:48.789 15:56:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:19:48.789 15:56:51 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:19:48.789 15:56:51 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:48.789 15:56:51 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:19:48.789 15:56:51 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:19:48.789 15:56:51 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:48.789 15:56:51 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:19:48.789 15:56:51 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:19:48.789 15:56:51 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:19:48.789 15:56:51 -- setup/devices.sh@50 -- # local mount_point= 00:19:48.789 15:56:51 -- setup/devices.sh@51 -- # local test_file= 00:19:48.789 15:56:51 -- setup/devices.sh@53 -- # local found=0 00:19:48.789 15:56:51 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:19:48.789 15:56:51 -- setup/devices.sh@59 -- # local pci status 00:19:48.789 15:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:48.789 15:56:51 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:19:48.789 15:56:51 -- setup/devices.sh@47 -- # setup output config 00:19:48.789 15:56:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:19:48.789 15:56:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:19:49.047 15:56:51 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:19:49.047 15:56:51 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:19:49.047 15:56:51 -- setup/devices.sh@63 -- # found=1 00:19:49.047 15:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:49.047 15:56:51 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:19:49.047 15:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:49.304 15:56:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:19:49.305 15:56:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:49.305 15:56:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:19:49.305 15:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:49.305 15:56:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:19:49.305 15:56:52 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:19:49.305 15:56:52 -- setup/devices.sh@68 -- # return 0 00:19:49.305 15:56:52 -- setup/devices.sh@128 -- # cleanup_nvme 00:19:49.305 15:56:52 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:49.305 15:56:52 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:19:49.305 15:56:52 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:19:49.305 15:56:52 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:19:49.305 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:19:49.305 00:19:49.305 real 0m4.355s 00:19:49.305 user 0m0.937s 00:19:49.305 sys 0m1.134s 00:19:49.305 15:56:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:49.305 15:56:52 -- common/autotest_common.sh@10 -- # set +x 00:19:49.305 ************************************ 00:19:49.305 END TEST nvme_mount 00:19:49.305 ************************************ 00:19:49.305 15:56:52 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:19:49.305 15:56:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:49.305 15:56:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:49.305 15:56:52 -- common/autotest_common.sh@10 -- # set +x 00:19:49.562 ************************************ 00:19:49.562 START TEST dm_mount 00:19:49.562 ************************************ 00:19:49.562 15:56:52 -- common/autotest_common.sh@1104 -- # dm_mount 00:19:49.562 15:56:52 -- setup/devices.sh@144 -- # pv=nvme0n1 00:19:49.562 15:56:52 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:19:49.562 15:56:52 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:19:49.562 15:56:52 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:19:49.562 15:56:52 -- setup/common.sh@39 -- # local disk=nvme0n1 00:19:49.562 15:56:52 -- setup/common.sh@40 -- # local part_no=2 00:19:49.562 15:56:52 -- setup/common.sh@41 -- # local size=1073741824 00:19:49.562 15:56:52 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:19:49.562 15:56:52 -- setup/common.sh@44 -- # parts=() 00:19:49.562 15:56:52 -- setup/common.sh@44 -- # local parts 00:19:49.562 15:56:52 -- setup/common.sh@46 -- # (( part = 1 )) 00:19:49.562 15:56:52 -- setup/common.sh@46 -- # (( part <= part_no )) 00:19:49.562 15:56:52 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:19:49.562 15:56:52 -- setup/common.sh@46 -- # (( part++ )) 00:19:49.562 15:56:52 -- setup/common.sh@46 -- # (( part <= part_no )) 00:19:49.562 15:56:52 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:19:49.562 15:56:52 -- setup/common.sh@46 -- # (( part++ )) 00:19:49.562 15:56:52 -- setup/common.sh@46 -- # (( part <= part_no )) 00:19:49.562 15:56:52 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:19:49.562 15:56:52 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:19:49.562 15:56:52 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:19:50.495 Creating new GPT entries in memory. 00:19:50.495 GPT data structures destroyed! You may now partition the disk using fdisk or 00:19:50.495 other utilities. 00:19:50.495 15:56:53 -- setup/common.sh@57 -- # (( part = 1 )) 00:19:50.495 15:56:53 -- setup/common.sh@57 -- # (( part <= part_no )) 00:19:50.495 15:56:53 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:19:50.495 15:56:53 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:19:50.495 15:56:53 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:19:51.454 Creating new GPT entries in memory. 00:19:51.454 The operation has completed successfully. 00:19:51.454 15:56:54 -- setup/common.sh@57 -- # (( part++ )) 00:19:51.454 15:56:54 -- setup/common.sh@57 -- # (( part <= part_no )) 00:19:51.454 15:56:54 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:19:51.454 15:56:54 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:19:51.454 15:56:54 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:19:52.389 The operation has completed successfully. 00:19:52.389 15:56:55 -- setup/common.sh@57 -- # (( part++ )) 00:19:52.389 15:56:55 -- setup/common.sh@57 -- # (( part <= part_no )) 00:19:52.389 15:56:55 -- setup/common.sh@62 -- # wait 52541 00:19:52.648 15:56:55 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:19:52.648 15:56:55 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:19:52.648 15:56:55 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:19:52.648 15:56:55 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:19:52.648 15:56:55 -- setup/devices.sh@160 -- # for t in {1..5} 00:19:52.648 15:56:55 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:19:52.648 15:56:55 -- setup/devices.sh@161 -- # break 00:19:52.648 15:56:55 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:19:52.648 15:56:55 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:19:52.648 15:56:55 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:19:52.648 15:56:55 -- setup/devices.sh@166 -- # dm=dm-0 00:19:52.648 15:56:55 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:19:52.648 15:56:55 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:19:52.648 15:56:55 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:19:52.648 15:56:55 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:19:52.648 15:56:55 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:19:52.648 15:56:55 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:19:52.648 15:56:55 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:19:52.648 15:56:55 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:19:52.648 15:56:55 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:19:52.648 15:56:55 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:19:52.648 15:56:55 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:19:52.648 15:56:55 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:19:52.648 15:56:55 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:19:52.648 15:56:55 -- setup/devices.sh@53 -- # local found=0 00:19:52.648 15:56:55 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:19:52.648 15:56:55 -- setup/devices.sh@56 -- # : 00:19:52.648 15:56:55 -- setup/devices.sh@59 -- # local pci status 00:19:52.648 15:56:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:52.648 15:56:55 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:19:52.648 15:56:55 -- setup/devices.sh@47 -- # setup output config 00:19:52.648 15:56:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:19:52.648 15:56:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:19:52.649 15:56:55 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:19:52.649 15:56:55 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:19:52.649 15:56:55 -- setup/devices.sh@63 -- # found=1 00:19:52.649 15:56:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:52.649 15:56:55 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:19:52.649 15:56:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:53.216 15:56:55 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:19:53.216 15:56:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:53.216 15:56:55 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:19:53.216 15:56:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:53.216 15:56:55 -- setup/devices.sh@66 -- # (( found == 1 )) 00:19:53.216 15:56:55 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:19:53.216 15:56:55 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:19:53.216 15:56:55 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:19:53.216 15:56:55 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:19:53.216 15:56:55 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:19:53.216 15:56:55 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:19:53.216 15:56:55 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:19:53.216 15:56:55 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:19:53.216 15:56:55 -- setup/devices.sh@50 -- # local mount_point= 00:19:53.216 15:56:55 -- setup/devices.sh@51 -- # local test_file= 00:19:53.216 15:56:55 -- setup/devices.sh@53 -- # local found=0 00:19:53.216 15:56:55 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:19:53.216 15:56:55 -- setup/devices.sh@59 -- # local pci status 00:19:53.216 15:56:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:53.216 15:56:55 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:19:53.216 15:56:55 -- setup/devices.sh@47 -- # setup output config 00:19:53.216 15:56:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:19:53.216 15:56:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:19:53.475 15:56:56 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:19:53.475 15:56:56 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:19:53.475 15:56:56 -- setup/devices.sh@63 -- # found=1 00:19:53.475 15:56:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:53.475 15:56:56 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:19:53.475 15:56:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:53.733 15:56:56 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:19:53.734 15:56:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:53.734 15:56:56 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:19:53.734 15:56:56 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:53.734 15:56:56 -- setup/devices.sh@66 -- # (( found == 1 )) 00:19:53.734 15:56:56 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:19:53.734 15:56:56 -- setup/devices.sh@68 -- # return 0 00:19:53.734 15:56:56 -- setup/devices.sh@187 -- # cleanup_dm 00:19:53.734 15:56:56 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:19:53.734 15:56:56 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:19:53.734 15:56:56 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:19:53.992 15:56:56 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:19:53.992 15:56:56 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:19:53.992 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:19:53.992 15:56:56 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:19:53.992 15:56:56 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:19:53.992 00:19:53.992 real 0m4.456s 00:19:53.992 user 0m0.667s 00:19:53.992 sys 0m0.738s 00:19:53.992 15:56:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:53.992 ************************************ 00:19:53.992 END TEST dm_mount 00:19:53.992 ************************************ 00:19:53.992 15:56:56 -- common/autotest_common.sh@10 -- # set +x 00:19:53.992 15:56:56 -- setup/devices.sh@1 -- # cleanup 00:19:53.992 15:56:56 -- setup/devices.sh@11 -- # cleanup_nvme 00:19:53.992 15:56:56 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:19:53.992 15:56:56 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:19:53.992 15:56:56 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:19:53.992 15:56:56 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:19:53.992 15:56:56 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:19:54.251 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:19:54.251 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:19:54.251 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:19:54.251 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:19:54.252 15:56:56 -- setup/devices.sh@12 -- # cleanup_dm 00:19:54.252 15:56:56 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:19:54.252 15:56:56 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:19:54.252 15:56:56 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:19:54.252 15:56:56 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:19:54.252 15:56:56 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:19:54.252 15:56:56 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:19:54.252 00:19:54.252 real 0m10.271s 00:19:54.252 user 0m2.233s 00:19:54.252 sys 0m2.431s 00:19:54.252 15:56:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:54.252 ************************************ 00:19:54.252 END TEST devices 00:19:54.252 15:56:56 -- common/autotest_common.sh@10 -- # set +x 00:19:54.252 ************************************ 00:19:54.252 00:19:54.252 real 0m21.253s 00:19:54.252 user 0m7.115s 00:19:54.252 sys 0m8.594s 00:19:54.252 15:56:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:54.252 ************************************ 00:19:54.252 15:56:56 -- common/autotest_common.sh@10 -- # set +x 00:19:54.252 END TEST setup.sh 00:19:54.252 ************************************ 00:19:54.252 15:56:57 -- 
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:19:54.510 Hugepages 00:19:54.510 node hugesize free / total 00:19:54.510 node0 1048576kB 0 / 0 00:19:54.510 node0 2048kB 2048 / 2048 00:19:54.510 00:19:54.510 Type BDF Vendor Device NUMA Driver Device Block devices 00:19:54.510 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:19:54.510 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:19:54.510 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:19:54.510 15:56:57 -- spdk/autotest.sh@141 -- # uname -s 00:19:54.510 15:56:57 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:19:54.510 15:56:57 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:19:54.510 15:56:57 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:55.445 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:55.445 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:19:55.445 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:19:55.445 15:56:58 -- common/autotest_common.sh@1517 -- # sleep 1 00:19:56.405 15:56:59 -- common/autotest_common.sh@1518 -- # bdfs=() 00:19:56.405 15:56:59 -- common/autotest_common.sh@1518 -- # local bdfs 00:19:56.405 15:56:59 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:19:56.405 15:56:59 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:19:56.405 15:56:59 -- common/autotest_common.sh@1498 -- # bdfs=() 00:19:56.405 15:56:59 -- common/autotest_common.sh@1498 -- # local bdfs 00:19:56.405 15:56:59 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:56.405 15:56:59 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:56.405 15:56:59 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:19:56.664 15:56:59 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:19:56.664 15:56:59 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:19:56.664 15:56:59 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:56.923 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:56.923 Waiting for block devices as requested 00:19:56.923 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:19:57.181 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:19:57.181 15:56:59 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:19:57.181 15:56:59 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:19:57.181 15:56:59 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:19:57.181 15:56:59 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:19:57.181 15:56:59 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:19:57.181 15:56:59 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:19:57.181 15:56:59 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:19:57.181 15:56:59 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:19:57.181 15:56:59 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:19:57.181 15:56:59 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:19:57.181 15:56:59 -- 
common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:19:57.181 15:56:59 -- common/autotest_common.sh@1530 -- # grep oacs 00:19:57.181 15:56:59 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:19:57.181 15:56:59 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:19:57.181 15:56:59 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:19:57.181 15:56:59 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:19:57.181 15:56:59 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:19:57.181 15:56:59 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:19:57.181 15:56:59 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:19:57.181 15:56:59 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:19:57.181 15:56:59 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:19:57.181 15:56:59 -- common/autotest_common.sh@1542 -- # continue 00:19:57.181 15:56:59 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:19:57.181 15:56:59 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:19:57.181 15:56:59 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:19:57.181 15:56:59 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:19:57.181 15:56:59 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:19:57.181 15:56:59 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:19:57.181 15:56:59 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:19:57.181 15:56:59 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:19:57.181 15:56:59 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:19:57.181 15:56:59 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:19:57.181 15:56:59 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:19:57.181 15:56:59 -- common/autotest_common.sh@1530 -- # grep oacs 00:19:57.181 15:56:59 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:19:57.181 15:56:59 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:19:57.181 15:56:59 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:19:57.181 15:56:59 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:19:57.181 15:56:59 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:19:57.181 15:56:59 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:19:57.181 15:56:59 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:19:57.181 15:56:59 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:19:57.181 15:56:59 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:19:57.181 15:56:59 -- common/autotest_common.sh@1542 -- # continue 00:19:57.181 15:56:59 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:19:57.181 15:56:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:57.181 15:56:59 -- common/autotest_common.sh@10 -- # set +x 00:19:57.181 15:56:59 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:19:57.182 15:56:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:57.182 15:56:59 -- common/autotest_common.sh@10 -- # set +x 00:19:57.182 15:56:59 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:57.748 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:58.007 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:19:58.007 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:19:58.007 15:57:00 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:19:58.007 15:57:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:58.007 15:57:00 -- common/autotest_common.sh@10 -- # set +x 00:19:58.007 15:57:00 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:19:58.007 15:57:00 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:19:58.007 15:57:00 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:19:58.007 15:57:00 -- common/autotest_common.sh@1562 -- # bdfs=() 00:19:58.007 15:57:00 -- common/autotest_common.sh@1562 -- # local bdfs 00:19:58.007 15:57:00 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:19:58.007 15:57:00 -- common/autotest_common.sh@1498 -- # bdfs=() 00:19:58.007 15:57:00 -- common/autotest_common.sh@1498 -- # local bdfs 00:19:58.007 15:57:00 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:58.007 15:57:00 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:58.007 15:57:00 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:19:58.007 15:57:00 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:19:58.007 15:57:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:19:58.007 15:57:00 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:19:58.267 15:57:00 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:19:58.267 15:57:00 -- common/autotest_common.sh@1565 -- # device=0x0010 00:19:58.267 15:57:00 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:19:58.267 15:57:00 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:19:58.267 15:57:00 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:19:58.267 15:57:00 -- common/autotest_common.sh@1565 -- # device=0x0010 00:19:58.267 15:57:00 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:19:58.267 15:57:00 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:19:58.267 15:57:00 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:19:58.267 15:57:00 -- common/autotest_common.sh@1578 -- # return 0 00:19:58.267 15:57:00 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:19:58.267 15:57:00 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:19:58.267 15:57:00 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:19:58.267 15:57:00 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:19:58.267 15:57:00 -- spdk/autotest.sh@173 -- # timing_enter lib 00:19:58.267 15:57:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:58.267 15:57:00 -- common/autotest_common.sh@10 -- # set +x 00:19:58.267 15:57:00 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:19:58.267 15:57:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:58.267 15:57:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:58.267 15:57:00 -- common/autotest_common.sh@10 -- # set +x 00:19:58.267 ************************************ 00:19:58.267 START TEST env 00:19:58.267 ************************************ 00:19:58.267 15:57:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:19:58.267 * Looking for test storage... 
00:19:58.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:19:58.267 15:57:00 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:19:58.267 15:57:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:58.267 15:57:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:58.267 15:57:00 -- common/autotest_common.sh@10 -- # set +x 00:19:58.267 ************************************ 00:19:58.267 START TEST env_memory 00:19:58.267 ************************************ 00:19:58.267 15:57:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:19:58.267 00:19:58.267 00:19:58.267 CUnit - A unit testing framework for C - Version 2.1-3 00:19:58.267 http://cunit.sourceforge.net/ 00:19:58.267 00:19:58.267 00:19:58.267 Suite: memory 00:19:58.267 Test: alloc and free memory map ...[2024-07-22 15:57:01.032309] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:19:58.267 passed 00:19:58.267 Test: mem map translation ...[2024-07-22 15:57:01.063151] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:19:58.267 [2024-07-22 15:57:01.063210] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:19:58.267 [2024-07-22 15:57:01.063266] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:19:58.267 [2024-07-22 15:57:01.063277] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:19:58.267 passed 00:19:58.267 Test: mem map registration ...[2024-07-22 15:57:01.127550] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:19:58.267 [2024-07-22 15:57:01.127621] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:19:58.528 passed 00:19:58.528 Test: mem map adjacent registrations ...passed 00:19:58.528 00:19:58.528 Run Summary: Type Total Ran Passed Failed Inactive 00:19:58.528 suites 1 1 n/a 0 0 00:19:58.528 tests 4 4 4 0 0 00:19:58.528 asserts 152 152 152 0 n/a 00:19:58.528 00:19:58.528 Elapsed time = 0.214 seconds 00:19:58.528 00:19:58.528 real 0m0.230s 00:19:58.528 user 0m0.210s 00:19:58.528 sys 0m0.018s 00:19:58.528 15:57:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:58.528 15:57:01 -- common/autotest_common.sh@10 -- # set +x 00:19:58.528 ************************************ 00:19:58.528 END TEST env_memory 00:19:58.528 ************************************ 00:19:58.528 15:57:01 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:19:58.528 15:57:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:58.528 15:57:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:58.528 15:57:01 -- common/autotest_common.sh@10 -- # set +x 00:19:58.528 ************************************ 00:19:58.528 START TEST env_vtophys 00:19:58.528 ************************************ 00:19:58.528 15:57:01 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:19:58.528 EAL: lib.eal log level changed from notice to debug 00:19:58.528 EAL: Detected lcore 0 as core 0 on socket 0 00:19:58.528 EAL: Detected lcore 1 as core 0 on socket 0 00:19:58.528 EAL: Detected lcore 2 as core 0 on socket 0 00:19:58.528 EAL: Detected lcore 3 as core 0 on socket 0 00:19:58.528 EAL: Detected lcore 4 as core 0 on socket 0 00:19:58.528 EAL: Detected lcore 5 as core 0 on socket 0 00:19:58.528 EAL: Detected lcore 6 as core 0 on socket 0 00:19:58.528 EAL: Detected lcore 7 as core 0 on socket 0 00:19:58.528 EAL: Detected lcore 8 as core 0 on socket 0 00:19:58.528 EAL: Detected lcore 9 as core 0 on socket 0 00:19:58.528 EAL: Maximum logical cores by configuration: 128 00:19:58.528 EAL: Detected CPU lcores: 10 00:19:58.528 EAL: Detected NUMA nodes: 1 00:19:58.528 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:19:58.528 EAL: Detected shared linkage of DPDK 00:19:58.528 EAL: No shared files mode enabled, IPC will be disabled 00:19:58.528 EAL: Selected IOVA mode 'PA' 00:19:58.528 EAL: Probing VFIO support... 00:19:58.528 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:19:58.528 EAL: VFIO modules not loaded, skipping VFIO support... 00:19:58.528 EAL: Ask a virtual area of 0x2e000 bytes 00:19:58.528 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:19:58.528 EAL: Setting up physically contiguous memory... 00:19:58.528 EAL: Setting maximum number of open files to 524288 00:19:58.528 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:19:58.528 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:19:58.528 EAL: Ask a virtual area of 0x61000 bytes 00:19:58.528 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:19:58.528 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:19:58.528 EAL: Ask a virtual area of 0x400000000 bytes 00:19:58.528 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:19:58.528 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:19:58.528 EAL: Ask a virtual area of 0x61000 bytes 00:19:58.528 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:19:58.528 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:19:58.528 EAL: Ask a virtual area of 0x400000000 bytes 00:19:58.528 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:19:58.528 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:19:58.528 EAL: Ask a virtual area of 0x61000 bytes 00:19:58.528 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:19:58.528 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:19:58.528 EAL: Ask a virtual area of 0x400000000 bytes 00:19:58.528 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:19:58.528 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:19:58.528 EAL: Ask a virtual area of 0x61000 bytes 00:19:58.528 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:19:58.528 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:19:58.528 EAL: Ask a virtual area of 0x400000000 bytes 00:19:58.528 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:19:58.528 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:19:58.528 EAL: Hugepages will be freed exactly as allocated. 
00:19:58.528 EAL: No shared files mode enabled, IPC is disabled 00:19:58.528 EAL: No shared files mode enabled, IPC is disabled 00:19:58.787 EAL: TSC frequency is ~2200000 KHz 00:19:58.787 EAL: Main lcore 0 is ready (tid=7feaeb42ba00;cpuset=[0]) 00:19:58.787 EAL: Trying to obtain current memory policy. 00:19:58.787 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:58.787 EAL: Restoring previous memory policy: 0 00:19:58.787 EAL: request: mp_malloc_sync 00:19:58.787 EAL: No shared files mode enabled, IPC is disabled 00:19:58.787 EAL: Heap on socket 0 was expanded by 2MB 00:19:58.787 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:19:58.787 EAL: No PCI address specified using 'addr=' in: bus=pci 00:19:58.787 EAL: Mem event callback 'spdk:(nil)' registered 00:19:58.787 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:19:58.787 00:19:58.787 00:19:58.787 CUnit - A unit testing framework for C - Version 2.1-3 00:19:58.787 http://cunit.sourceforge.net/ 00:19:58.787 00:19:58.787 00:19:58.787 Suite: components_suite 00:19:58.787 Test: vtophys_malloc_test ...passed 00:19:58.787 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:19:58.787 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:58.787 EAL: Restoring previous memory policy: 4 00:19:58.787 EAL: Calling mem event callback 'spdk:(nil)' 00:19:58.787 EAL: request: mp_malloc_sync 00:19:58.787 EAL: No shared files mode enabled, IPC is disabled 00:19:58.787 EAL: Heap on socket 0 was expanded by 4MB 00:19:58.787 EAL: Calling mem event callback 'spdk:(nil)' 00:19:58.787 EAL: request: mp_malloc_sync 00:19:58.787 EAL: No shared files mode enabled, IPC is disabled 00:19:58.787 EAL: Heap on socket 0 was shrunk by 4MB 00:19:58.787 EAL: Trying to obtain current memory policy. 00:19:58.787 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:58.787 EAL: Restoring previous memory policy: 4 00:19:58.787 EAL: Calling mem event callback 'spdk:(nil)' 00:19:58.787 EAL: request: mp_malloc_sync 00:19:58.787 EAL: No shared files mode enabled, IPC is disabled 00:19:58.787 EAL: Heap on socket 0 was expanded by 6MB 00:19:58.787 EAL: Calling mem event callback 'spdk:(nil)' 00:19:58.787 EAL: request: mp_malloc_sync 00:19:58.787 EAL: No shared files mode enabled, IPC is disabled 00:19:58.787 EAL: Heap on socket 0 was shrunk by 6MB 00:19:58.787 EAL: Trying to obtain current memory policy. 00:19:58.787 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:58.787 EAL: Restoring previous memory policy: 4 00:19:58.787 EAL: Calling mem event callback 'spdk:(nil)' 00:19:58.787 EAL: request: mp_malloc_sync 00:19:58.787 EAL: No shared files mode enabled, IPC is disabled 00:19:58.787 EAL: Heap on socket 0 was expanded by 10MB 00:19:58.787 EAL: Calling mem event callback 'spdk:(nil)' 00:19:58.787 EAL: request: mp_malloc_sync 00:19:58.787 EAL: No shared files mode enabled, IPC is disabled 00:19:58.787 EAL: Heap on socket 0 was shrunk by 10MB 00:19:58.787 EAL: Trying to obtain current memory policy. 
00:19:58.787 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:58.787 EAL: Restoring previous memory policy: 4 00:19:58.787 EAL: Calling mem event callback 'spdk:(nil)' 00:19:58.787 EAL: request: mp_malloc_sync 00:19:58.787 EAL: No shared files mode enabled, IPC is disabled 00:19:58.787 EAL: Heap on socket 0 was expanded by 18MB 00:19:58.787 EAL: Calling mem event callback 'spdk:(nil)' 00:19:58.787 EAL: request: mp_malloc_sync 00:19:58.787 EAL: No shared files mode enabled, IPC is disabled 00:19:58.787 EAL: Heap on socket 0 was shrunk by 18MB 00:19:58.787 EAL: Trying to obtain current memory policy. 00:19:58.787 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:58.787 EAL: Restoring previous memory policy: 4 00:19:58.787 EAL: Calling mem event callback 'spdk:(nil)' 00:19:58.787 EAL: request: mp_malloc_sync 00:19:58.788 EAL: No shared files mode enabled, IPC is disabled 00:19:58.788 EAL: Heap on socket 0 was expanded by 34MB 00:19:58.788 EAL: Calling mem event callback 'spdk:(nil)' 00:19:58.788 EAL: request: mp_malloc_sync 00:19:58.788 EAL: No shared files mode enabled, IPC is disabled 00:19:58.788 EAL: Heap on socket 0 was shrunk by 34MB 00:19:58.788 EAL: Trying to obtain current memory policy. 00:19:58.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:58.788 EAL: Restoring previous memory policy: 4 00:19:58.788 EAL: Calling mem event callback 'spdk:(nil)' 00:19:58.788 EAL: request: mp_malloc_sync 00:19:58.788 EAL: No shared files mode enabled, IPC is disabled 00:19:58.788 EAL: Heap on socket 0 was expanded by 66MB 00:19:58.788 EAL: Calling mem event callback 'spdk:(nil)' 00:19:58.788 EAL: request: mp_malloc_sync 00:19:58.788 EAL: No shared files mode enabled, IPC is disabled 00:19:58.788 EAL: Heap on socket 0 was shrunk by 66MB 00:19:58.788 EAL: Trying to obtain current memory policy. 00:19:58.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:58.788 EAL: Restoring previous memory policy: 4 00:19:58.788 EAL: Calling mem event callback 'spdk:(nil)' 00:19:58.788 EAL: request: mp_malloc_sync 00:19:58.788 EAL: No shared files mode enabled, IPC is disabled 00:19:58.788 EAL: Heap on socket 0 was expanded by 130MB 00:19:58.788 EAL: Calling mem event callback 'spdk:(nil)' 00:19:58.788 EAL: request: mp_malloc_sync 00:19:58.788 EAL: No shared files mode enabled, IPC is disabled 00:19:58.788 EAL: Heap on socket 0 was shrunk by 130MB 00:19:58.788 EAL: Trying to obtain current memory policy. 00:19:58.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:58.788 EAL: Restoring previous memory policy: 4 00:19:58.788 EAL: Calling mem event callback 'spdk:(nil)' 00:19:58.788 EAL: request: mp_malloc_sync 00:19:58.788 EAL: No shared files mode enabled, IPC is disabled 00:19:58.788 EAL: Heap on socket 0 was expanded by 258MB 00:19:58.788 EAL: Calling mem event callback 'spdk:(nil)' 00:19:58.788 EAL: request: mp_malloc_sync 00:19:58.788 EAL: No shared files mode enabled, IPC is disabled 00:19:58.788 EAL: Heap on socket 0 was shrunk by 258MB 00:19:58.788 EAL: Trying to obtain current memory policy. 
00:19:58.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:59.046 EAL: Restoring previous memory policy: 4 00:19:59.046 EAL: Calling mem event callback 'spdk:(nil)' 00:19:59.046 EAL: request: mp_malloc_sync 00:19:59.046 EAL: No shared files mode enabled, IPC is disabled 00:19:59.046 EAL: Heap on socket 0 was expanded by 514MB 00:19:59.046 EAL: Calling mem event callback 'spdk:(nil)' 00:19:59.046 EAL: request: mp_malloc_sync 00:19:59.046 EAL: No shared files mode enabled, IPC is disabled 00:19:59.046 EAL: Heap on socket 0 was shrunk by 514MB 00:19:59.046 EAL: Trying to obtain current memory policy. 00:19:59.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:59.306 EAL: Restoring previous memory policy: 4 00:19:59.306 EAL: Calling mem event callback 'spdk:(nil)' 00:19:59.306 EAL: request: mp_malloc_sync 00:19:59.306 EAL: No shared files mode enabled, IPC is disabled 00:19:59.306 EAL: Heap on socket 0 was expanded by 1026MB 00:19:59.306 EAL: Calling mem event callback 'spdk:(nil)' 00:19:59.306 passed 00:19:59.306 00:19:59.306 Run Summary: Type Total Ran Passed Failed Inactive 00:19:59.306 suites 1 1 n/a 0 0 00:19:59.306 tests 2 2 2 0 0 00:19:59.306 asserts 5148 5148 5148 0 n/a 00:19:59.306 00:19:59.306 Elapsed time = 0.664 seconds 00:19:59.306 EAL: request: mp_malloc_sync 00:19:59.306 EAL: No shared files mode enabled, IPC is disabled 00:19:59.306 EAL: Heap on socket 0 was shrunk by 1026MB 00:19:59.306 EAL: Calling mem event callback 'spdk:(nil)' 00:19:59.306 EAL: request: mp_malloc_sync 00:19:59.306 EAL: No shared files mode enabled, IPC is disabled 00:19:59.306 EAL: Heap on socket 0 was shrunk by 2MB 00:19:59.306 EAL: No shared files mode enabled, IPC is disabled 00:19:59.306 EAL: No shared files mode enabled, IPC is disabled 00:19:59.306 EAL: No shared files mode enabled, IPC is disabled 00:19:59.306 00:19:59.306 real 0m0.855s 00:19:59.306 user 0m0.435s 00:19:59.306 sys 0m0.291s 00:19:59.306 15:57:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:59.306 15:57:02 -- common/autotest_common.sh@10 -- # set +x 00:19:59.306 ************************************ 00:19:59.306 END TEST env_vtophys 00:19:59.306 ************************************ 00:19:59.306 15:57:02 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:19:59.306 15:57:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:59.306 15:57:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:59.306 15:57:02 -- common/autotest_common.sh@10 -- # set +x 00:19:59.306 ************************************ 00:19:59.306 START TEST env_pci 00:19:59.306 ************************************ 00:19:59.306 15:57:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:19:59.565 00:19:59.565 00:19:59.565 CUnit - A unit testing framework for C - Version 2.1-3 00:19:59.565 http://cunit.sourceforge.net/ 00:19:59.565 00:19:59.565 00:19:59.565 Suite: pci 00:19:59.565 Test: pci_hook ...[2024-07-22 15:57:02.184609] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 53662 has claimed it 00:19:59.565 passed 00:19:59.565 00:19:59.565 Run Summary: Type Total Ran Passed Failed Inactive 00:19:59.565 suites 1 1 n/a 0 0 00:19:59.565 tests 1 1 1 0 0 00:19:59.565 asserts 25 25 25 0 n/a 00:19:59.565 00:19:59.565 Elapsed time = 0.002EAL: Cannot find device (10000:00:01.0) 00:19:59.565 EAL: Failed to attach device on primary process 
00:19:59.565 seconds 00:19:59.565 00:19:59.565 real 0m0.021s 00:19:59.565 user 0m0.009s 00:19:59.565 sys 0m0.011s 00:19:59.565 15:57:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:59.565 15:57:02 -- common/autotest_common.sh@10 -- # set +x 00:19:59.565 ************************************ 00:19:59.565 END TEST env_pci 00:19:59.565 ************************************ 00:19:59.565 15:57:02 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:19:59.565 15:57:02 -- env/env.sh@15 -- # uname 00:19:59.565 15:57:02 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:19:59.565 15:57:02 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:19:59.565 15:57:02 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:19:59.565 15:57:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:59.565 15:57:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:59.565 15:57:02 -- common/autotest_common.sh@10 -- # set +x 00:19:59.565 ************************************ 00:19:59.565 START TEST env_dpdk_post_init 00:19:59.565 ************************************ 00:19:59.565 15:57:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:19:59.565 EAL: Detected CPU lcores: 10 00:19:59.565 EAL: Detected NUMA nodes: 1 00:19:59.565 EAL: Detected shared linkage of DPDK 00:19:59.565 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:19:59.565 EAL: Selected IOVA mode 'PA' 00:19:59.565 TELEMETRY: No legacy callbacks, legacy socket not created 00:19:59.565 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:19:59.565 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:19:59.565 Starting DPDK initialization... 00:19:59.565 Starting SPDK post initialization... 00:19:59.565 SPDK NVMe probe 00:19:59.565 Attaching to 0000:00:06.0 00:19:59.565 Attaching to 0000:00:07.0 00:19:59.565 Attached to 0000:00:06.0 00:19:59.565 Attached to 0000:00:07.0 00:19:59.565 Cleaning up... 
00:19:59.565 00:19:59.565 real 0m0.183s 00:19:59.565 user 0m0.055s 00:19:59.565 sys 0m0.027s 00:19:59.565 15:57:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:59.565 15:57:02 -- common/autotest_common.sh@10 -- # set +x 00:19:59.565 ************************************ 00:19:59.565 END TEST env_dpdk_post_init 00:19:59.565 ************************************ 00:19:59.824 15:57:02 -- env/env.sh@26 -- # uname 00:19:59.824 15:57:02 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:19:59.824 15:57:02 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:19:59.824 15:57:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:59.824 15:57:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:59.824 15:57:02 -- common/autotest_common.sh@10 -- # set +x 00:19:59.824 ************************************ 00:19:59.824 START TEST env_mem_callbacks 00:19:59.824 ************************************ 00:19:59.824 15:57:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:19:59.824 EAL: Detected CPU lcores: 10 00:19:59.824 EAL: Detected NUMA nodes: 1 00:19:59.824 EAL: Detected shared linkage of DPDK 00:19:59.824 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:19:59.824 EAL: Selected IOVA mode 'PA' 00:19:59.824 TELEMETRY: No legacy callbacks, legacy socket not created 00:19:59.824 00:19:59.824 00:19:59.824 CUnit - A unit testing framework for C - Version 2.1-3 00:19:59.824 http://cunit.sourceforge.net/ 00:19:59.824 00:19:59.824 00:19:59.824 Suite: memory 00:19:59.824 Test: test ... 00:19:59.824 register 0x200000200000 2097152 00:19:59.824 malloc 3145728 00:19:59.824 register 0x200000400000 4194304 00:19:59.824 buf 0x200000500000 len 3145728 PASSED 00:19:59.824 malloc 64 00:19:59.824 buf 0x2000004fff40 len 64 PASSED 00:19:59.824 malloc 4194304 00:19:59.824 register 0x200000800000 6291456 00:19:59.824 buf 0x200000a00000 len 4194304 PASSED 00:19:59.824 free 0x200000500000 3145728 00:19:59.824 free 0x2000004fff40 64 00:19:59.824 unregister 0x200000400000 4194304 PASSED 00:19:59.824 free 0x200000a00000 4194304 00:19:59.824 unregister 0x200000800000 6291456 PASSED 00:19:59.824 malloc 8388608 00:19:59.824 register 0x200000400000 10485760 00:19:59.824 buf 0x200000600000 len 8388608 PASSED 00:19:59.824 free 0x200000600000 8388608 00:19:59.824 unregister 0x200000400000 10485760 PASSED 00:19:59.824 passed 00:19:59.824 00:19:59.824 Run Summary: Type Total Ran Passed Failed Inactive 00:19:59.824 suites 1 1 n/a 0 0 00:19:59.824 tests 1 1 1 0 0 00:19:59.824 asserts 15 15 15 0 n/a 00:19:59.824 00:19:59.824 Elapsed time = 0.006 seconds 00:19:59.824 00:19:59.824 real 0m0.140s 00:19:59.824 user 0m0.018s 00:19:59.824 sys 0m0.021s 00:19:59.824 15:57:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:59.824 15:57:02 -- common/autotest_common.sh@10 -- # set +x 00:19:59.824 ************************************ 00:19:59.824 END TEST env_mem_callbacks 00:19:59.824 ************************************ 00:19:59.824 00:19:59.824 real 0m1.754s 00:19:59.824 user 0m0.828s 00:19:59.824 sys 0m0.582s 00:19:59.824 15:57:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:59.824 15:57:02 -- common/autotest_common.sh@10 -- # set +x 00:19:59.824 ************************************ 00:19:59.824 END TEST env 00:19:59.824 ************************************ 00:19:59.824 15:57:02 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
00:19:59.824 15:57:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:59.824 15:57:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:59.824 15:57:02 -- common/autotest_common.sh@10 -- # set +x 00:20:00.108 ************************************ 00:20:00.108 START TEST rpc 00:20:00.108 ************************************ 00:20:00.108 15:57:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:20:00.108 * Looking for test storage... 00:20:00.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:20:00.108 15:57:02 -- rpc/rpc.sh@65 -- # spdk_pid=53776 00:20:00.108 15:57:02 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:20:00.108 15:57:02 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:20:00.108 15:57:02 -- rpc/rpc.sh@67 -- # waitforlisten 53776 00:20:00.108 15:57:02 -- common/autotest_common.sh@819 -- # '[' -z 53776 ']' 00:20:00.108 15:57:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.108 15:57:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:00.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.108 15:57:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.108 15:57:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:00.108 15:57:02 -- common/autotest_common.sh@10 -- # set +x 00:20:00.108 [2024-07-22 15:57:02.822271] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:00.109 [2024-07-22 15:57:02.822372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53776 ] 00:20:00.109 [2024-07-22 15:57:02.959912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.366 [2024-07-22 15:57:03.028786] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:00.366 [2024-07-22 15:57:03.028961] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:20:00.366 [2024-07-22 15:57:03.028987] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 53776' to capture a snapshot of events at runtime. 00:20:00.366 [2024-07-22 15:57:03.028999] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid53776 for offline analysis/debug. 
00:20:00.366 [2024-07-22 15:57:03.029036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.300 15:57:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:01.300 15:57:03 -- common/autotest_common.sh@852 -- # return 0 00:20:01.300 15:57:03 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:20:01.300 15:57:03 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:20:01.300 15:57:03 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:20:01.300 15:57:03 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:20:01.300 15:57:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:01.300 15:57:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:01.300 15:57:03 -- common/autotest_common.sh@10 -- # set +x 00:20:01.300 ************************************ 00:20:01.300 START TEST rpc_integrity 00:20:01.300 ************************************ 00:20:01.300 15:57:03 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:20:01.301 15:57:03 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:01.301 15:57:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.301 15:57:03 -- common/autotest_common.sh@10 -- # set +x 00:20:01.301 15:57:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.301 15:57:03 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:20:01.301 15:57:03 -- rpc/rpc.sh@13 -- # jq length 00:20:01.301 15:57:03 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:20:01.301 15:57:03 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:20:01.301 15:57:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.301 15:57:03 -- common/autotest_common.sh@10 -- # set +x 00:20:01.301 15:57:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.301 15:57:03 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:20:01.301 15:57:03 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:20:01.301 15:57:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.301 15:57:03 -- common/autotest_common.sh@10 -- # set +x 00:20:01.301 15:57:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.301 15:57:03 -- rpc/rpc.sh@16 -- # bdevs='[ 00:20:01.301 { 00:20:01.301 "name": "Malloc0", 00:20:01.301 "aliases": [ 00:20:01.301 "dd888712-9713-4422-8842-9fc1cb260d02" 00:20:01.301 ], 00:20:01.301 "product_name": "Malloc disk", 00:20:01.301 "block_size": 512, 00:20:01.301 "num_blocks": 16384, 00:20:01.301 "uuid": "dd888712-9713-4422-8842-9fc1cb260d02", 00:20:01.301 "assigned_rate_limits": { 00:20:01.301 "rw_ios_per_sec": 0, 00:20:01.301 "rw_mbytes_per_sec": 0, 00:20:01.301 "r_mbytes_per_sec": 0, 00:20:01.301 "w_mbytes_per_sec": 0 00:20:01.301 }, 00:20:01.301 "claimed": false, 00:20:01.301 "zoned": false, 00:20:01.301 "supported_io_types": { 00:20:01.301 "read": true, 00:20:01.301 "write": true, 00:20:01.301 "unmap": true, 00:20:01.301 "write_zeroes": true, 00:20:01.301 "flush": true, 00:20:01.301 "reset": true, 00:20:01.301 "compare": false, 00:20:01.301 "compare_and_write": false, 00:20:01.301 "abort": true, 00:20:01.301 "nvme_admin": false, 00:20:01.301 "nvme_io": false 00:20:01.301 }, 00:20:01.301 "memory_domains": [ 00:20:01.301 { 00:20:01.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.301 
"dma_device_type": 2 00:20:01.301 } 00:20:01.301 ], 00:20:01.301 "driver_specific": {} 00:20:01.301 } 00:20:01.301 ]' 00:20:01.301 15:57:03 -- rpc/rpc.sh@17 -- # jq length 00:20:01.301 15:57:04 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:20:01.301 15:57:04 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:20:01.301 15:57:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.301 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:01.301 [2024-07-22 15:57:04.036001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:20:01.301 [2024-07-22 15:57:04.036064] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.301 [2024-07-22 15:57:04.036085] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fb23f0 00:20:01.301 [2024-07-22 15:57:04.036095] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.301 [2024-07-22 15:57:04.037657] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.301 [2024-07-22 15:57:04.037696] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:20:01.301 Passthru0 00:20:01.301 15:57:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.301 15:57:04 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:20:01.301 15:57:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.301 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:01.301 15:57:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.301 15:57:04 -- rpc/rpc.sh@20 -- # bdevs='[ 00:20:01.301 { 00:20:01.301 "name": "Malloc0", 00:20:01.301 "aliases": [ 00:20:01.301 "dd888712-9713-4422-8842-9fc1cb260d02" 00:20:01.301 ], 00:20:01.301 "product_name": "Malloc disk", 00:20:01.301 "block_size": 512, 00:20:01.301 "num_blocks": 16384, 00:20:01.301 "uuid": "dd888712-9713-4422-8842-9fc1cb260d02", 00:20:01.301 "assigned_rate_limits": { 00:20:01.301 "rw_ios_per_sec": 0, 00:20:01.301 "rw_mbytes_per_sec": 0, 00:20:01.301 "r_mbytes_per_sec": 0, 00:20:01.301 "w_mbytes_per_sec": 0 00:20:01.301 }, 00:20:01.301 "claimed": true, 00:20:01.301 "claim_type": "exclusive_write", 00:20:01.301 "zoned": false, 00:20:01.301 "supported_io_types": { 00:20:01.301 "read": true, 00:20:01.301 "write": true, 00:20:01.301 "unmap": true, 00:20:01.301 "write_zeroes": true, 00:20:01.301 "flush": true, 00:20:01.301 "reset": true, 00:20:01.301 "compare": false, 00:20:01.301 "compare_and_write": false, 00:20:01.301 "abort": true, 00:20:01.301 "nvme_admin": false, 00:20:01.301 "nvme_io": false 00:20:01.301 }, 00:20:01.301 "memory_domains": [ 00:20:01.301 { 00:20:01.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.301 "dma_device_type": 2 00:20:01.301 } 00:20:01.301 ], 00:20:01.301 "driver_specific": {} 00:20:01.301 }, 00:20:01.301 { 00:20:01.301 "name": "Passthru0", 00:20:01.301 "aliases": [ 00:20:01.301 "96104204-4fca-52fe-b4c2-c4df15aa0a62" 00:20:01.301 ], 00:20:01.301 "product_name": "passthru", 00:20:01.301 "block_size": 512, 00:20:01.301 "num_blocks": 16384, 00:20:01.301 "uuid": "96104204-4fca-52fe-b4c2-c4df15aa0a62", 00:20:01.301 "assigned_rate_limits": { 00:20:01.301 "rw_ios_per_sec": 0, 00:20:01.301 "rw_mbytes_per_sec": 0, 00:20:01.301 "r_mbytes_per_sec": 0, 00:20:01.301 "w_mbytes_per_sec": 0 00:20:01.301 }, 00:20:01.301 "claimed": false, 00:20:01.301 "zoned": false, 00:20:01.301 "supported_io_types": { 00:20:01.301 "read": true, 00:20:01.301 "write": true, 00:20:01.301 "unmap": true, 00:20:01.301 
"write_zeroes": true, 00:20:01.301 "flush": true, 00:20:01.301 "reset": true, 00:20:01.301 "compare": false, 00:20:01.301 "compare_and_write": false, 00:20:01.301 "abort": true, 00:20:01.301 "nvme_admin": false, 00:20:01.301 "nvme_io": false 00:20:01.301 }, 00:20:01.301 "memory_domains": [ 00:20:01.301 { 00:20:01.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.301 "dma_device_type": 2 00:20:01.301 } 00:20:01.301 ], 00:20:01.301 "driver_specific": { 00:20:01.301 "passthru": { 00:20:01.301 "name": "Passthru0", 00:20:01.301 "base_bdev_name": "Malloc0" 00:20:01.301 } 00:20:01.301 } 00:20:01.301 } 00:20:01.301 ]' 00:20:01.301 15:57:04 -- rpc/rpc.sh@21 -- # jq length 00:20:01.301 15:57:04 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:20:01.301 15:57:04 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:20:01.301 15:57:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.301 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:01.301 15:57:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.301 15:57:04 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:01.301 15:57:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.301 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:01.301 15:57:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.301 15:57:04 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:01.301 15:57:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.301 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:01.301 15:57:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.301 15:57:04 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:20:01.301 15:57:04 -- rpc/rpc.sh@26 -- # jq length 00:20:01.560 15:57:04 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:20:01.560 00:20:01.560 real 0m0.284s 00:20:01.560 user 0m0.189s 00:20:01.560 sys 0m0.029s 00:20:01.560 15:57:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:01.560 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:01.560 ************************************ 00:20:01.560 END TEST rpc_integrity 00:20:01.560 ************************************ 00:20:01.560 15:57:04 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:20:01.560 15:57:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:01.560 15:57:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:01.560 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:01.560 ************************************ 00:20:01.560 START TEST rpc_plugins 00:20:01.560 ************************************ 00:20:01.560 15:57:04 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:20:01.560 15:57:04 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:20:01.560 15:57:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.560 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:01.560 15:57:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.560 15:57:04 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:20:01.560 15:57:04 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:20:01.560 15:57:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.560 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:01.560 15:57:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.560 15:57:04 -- rpc/rpc.sh@31 -- # bdevs='[ 00:20:01.560 { 00:20:01.560 "name": "Malloc1", 00:20:01.560 "aliases": [ 00:20:01.560 "f3641051-80ad-4d45-93af-f9a84f79c137" 00:20:01.560 ], 00:20:01.560 "product_name": "Malloc disk", 00:20:01.560 
"block_size": 4096, 00:20:01.560 "num_blocks": 256, 00:20:01.560 "uuid": "f3641051-80ad-4d45-93af-f9a84f79c137", 00:20:01.560 "assigned_rate_limits": { 00:20:01.560 "rw_ios_per_sec": 0, 00:20:01.560 "rw_mbytes_per_sec": 0, 00:20:01.560 "r_mbytes_per_sec": 0, 00:20:01.560 "w_mbytes_per_sec": 0 00:20:01.560 }, 00:20:01.560 "claimed": false, 00:20:01.560 "zoned": false, 00:20:01.560 "supported_io_types": { 00:20:01.560 "read": true, 00:20:01.560 "write": true, 00:20:01.560 "unmap": true, 00:20:01.560 "write_zeroes": true, 00:20:01.560 "flush": true, 00:20:01.560 "reset": true, 00:20:01.560 "compare": false, 00:20:01.560 "compare_and_write": false, 00:20:01.560 "abort": true, 00:20:01.560 "nvme_admin": false, 00:20:01.560 "nvme_io": false 00:20:01.560 }, 00:20:01.560 "memory_domains": [ 00:20:01.560 { 00:20:01.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.560 "dma_device_type": 2 00:20:01.560 } 00:20:01.560 ], 00:20:01.560 "driver_specific": {} 00:20:01.560 } 00:20:01.560 ]' 00:20:01.560 15:57:04 -- rpc/rpc.sh@32 -- # jq length 00:20:01.561 15:57:04 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:20:01.561 15:57:04 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:20:01.561 15:57:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.561 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:01.561 15:57:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.561 15:57:04 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:20:01.561 15:57:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.561 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:01.561 15:57:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.561 15:57:04 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:20:01.561 15:57:04 -- rpc/rpc.sh@36 -- # jq length 00:20:01.561 15:57:04 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:20:01.561 00:20:01.561 real 0m0.147s 00:20:01.561 user 0m0.096s 00:20:01.561 sys 0m0.016s 00:20:01.561 15:57:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:01.561 ************************************ 00:20:01.561 END TEST rpc_plugins 00:20:01.561 ************************************ 00:20:01.561 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:01.561 15:57:04 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:20:01.561 15:57:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:01.561 15:57:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:01.561 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:01.561 ************************************ 00:20:01.561 START TEST rpc_trace_cmd_test 00:20:01.561 ************************************ 00:20:01.561 15:57:04 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:20:01.561 15:57:04 -- rpc/rpc.sh@40 -- # local info 00:20:01.561 15:57:04 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:20:01.561 15:57:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:01.561 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:01.819 15:57:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:01.819 15:57:04 -- rpc/rpc.sh@42 -- # info='{ 00:20:01.819 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid53776", 00:20:01.819 "tpoint_group_mask": "0x8", 00:20:01.819 "iscsi_conn": { 00:20:01.819 "mask": "0x2", 00:20:01.819 "tpoint_mask": "0x0" 00:20:01.819 }, 00:20:01.819 "scsi": { 00:20:01.819 "mask": "0x4", 00:20:01.819 "tpoint_mask": "0x0" 00:20:01.819 }, 00:20:01.819 "bdev": { 00:20:01.819 "mask": "0x8", 00:20:01.819 "tpoint_mask": 
"0xffffffffffffffff" 00:20:01.819 }, 00:20:01.819 "nvmf_rdma": { 00:20:01.819 "mask": "0x10", 00:20:01.819 "tpoint_mask": "0x0" 00:20:01.819 }, 00:20:01.819 "nvmf_tcp": { 00:20:01.819 "mask": "0x20", 00:20:01.819 "tpoint_mask": "0x0" 00:20:01.819 }, 00:20:01.819 "ftl": { 00:20:01.819 "mask": "0x40", 00:20:01.819 "tpoint_mask": "0x0" 00:20:01.819 }, 00:20:01.819 "blobfs": { 00:20:01.819 "mask": "0x80", 00:20:01.819 "tpoint_mask": "0x0" 00:20:01.819 }, 00:20:01.819 "dsa": { 00:20:01.820 "mask": "0x200", 00:20:01.820 "tpoint_mask": "0x0" 00:20:01.820 }, 00:20:01.820 "thread": { 00:20:01.820 "mask": "0x400", 00:20:01.820 "tpoint_mask": "0x0" 00:20:01.820 }, 00:20:01.820 "nvme_pcie": { 00:20:01.820 "mask": "0x800", 00:20:01.820 "tpoint_mask": "0x0" 00:20:01.820 }, 00:20:01.820 "iaa": { 00:20:01.820 "mask": "0x1000", 00:20:01.820 "tpoint_mask": "0x0" 00:20:01.820 }, 00:20:01.820 "nvme_tcp": { 00:20:01.820 "mask": "0x2000", 00:20:01.820 "tpoint_mask": "0x0" 00:20:01.820 }, 00:20:01.820 "bdev_nvme": { 00:20:01.820 "mask": "0x4000", 00:20:01.820 "tpoint_mask": "0x0" 00:20:01.820 } 00:20:01.820 }' 00:20:01.820 15:57:04 -- rpc/rpc.sh@43 -- # jq length 00:20:01.820 15:57:04 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:20:01.820 15:57:04 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:20:01.820 15:57:04 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:20:01.820 15:57:04 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:20:01.820 15:57:04 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:20:01.820 15:57:04 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:20:01.820 15:57:04 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:20:01.820 15:57:04 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:20:01.820 15:57:04 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:20:01.820 00:20:01.820 real 0m0.248s 00:20:01.820 user 0m0.221s 00:20:01.820 sys 0m0.020s 00:20:01.820 15:57:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:01.820 ************************************ 00:20:01.820 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:01.820 END TEST rpc_trace_cmd_test 00:20:01.820 ************************************ 00:20:02.079 15:57:04 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:20:02.079 15:57:04 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:20:02.079 15:57:04 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:20:02.079 15:57:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:02.079 15:57:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:02.079 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:02.079 ************************************ 00:20:02.079 START TEST rpc_daemon_integrity 00:20:02.079 ************************************ 00:20:02.079 15:57:04 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:20:02.079 15:57:04 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:02.079 15:57:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.079 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:02.079 15:57:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.079 15:57:04 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:20:02.079 15:57:04 -- rpc/rpc.sh@13 -- # jq length 00:20:02.079 15:57:04 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:20:02.079 15:57:04 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:20:02.079 15:57:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.079 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:02.079 15:57:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.079 15:57:04 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:20:02.079 15:57:04 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:20:02.079 15:57:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.079 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:02.079 15:57:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.079 15:57:04 -- rpc/rpc.sh@16 -- # bdevs='[ 00:20:02.079 { 00:20:02.079 "name": "Malloc2", 00:20:02.079 "aliases": [ 00:20:02.079 "3c0a087f-d559-46d2-90b0-c393f31a42a5" 00:20:02.079 ], 00:20:02.079 "product_name": "Malloc disk", 00:20:02.079 "block_size": 512, 00:20:02.079 "num_blocks": 16384, 00:20:02.079 "uuid": "3c0a087f-d559-46d2-90b0-c393f31a42a5", 00:20:02.079 "assigned_rate_limits": { 00:20:02.079 "rw_ios_per_sec": 0, 00:20:02.079 "rw_mbytes_per_sec": 0, 00:20:02.079 "r_mbytes_per_sec": 0, 00:20:02.079 "w_mbytes_per_sec": 0 00:20:02.079 }, 00:20:02.079 "claimed": false, 00:20:02.079 "zoned": false, 00:20:02.079 "supported_io_types": { 00:20:02.079 "read": true, 00:20:02.079 "write": true, 00:20:02.079 "unmap": true, 00:20:02.079 "write_zeroes": true, 00:20:02.079 "flush": true, 00:20:02.079 "reset": true, 00:20:02.079 "compare": false, 00:20:02.079 "compare_and_write": false, 00:20:02.079 "abort": true, 00:20:02.079 "nvme_admin": false, 00:20:02.079 "nvme_io": false 00:20:02.079 }, 00:20:02.079 "memory_domains": [ 00:20:02.079 { 00:20:02.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.079 "dma_device_type": 2 00:20:02.079 } 00:20:02.079 ], 00:20:02.079 "driver_specific": {} 00:20:02.079 } 00:20:02.079 ]' 00:20:02.079 15:57:04 -- rpc/rpc.sh@17 -- # jq length 00:20:02.079 15:57:04 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:20:02.079 15:57:04 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:20:02.079 15:57:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.079 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:02.079 [2024-07-22 15:57:04.840289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:20:02.079 [2024-07-22 15:57:04.840347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.079 [2024-07-22 15:57:04.840371] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fb1bd0 00:20:02.079 [2024-07-22 15:57:04.840381] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.079 [2024-07-22 15:57:04.841839] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.079 [2024-07-22 15:57:04.841874] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:20:02.079 Passthru0 00:20:02.079 15:57:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.079 15:57:04 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:20:02.079 15:57:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.079 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:02.079 15:57:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.079 15:57:04 -- rpc/rpc.sh@20 -- # bdevs='[ 00:20:02.079 { 00:20:02.079 "name": "Malloc2", 00:20:02.079 "aliases": [ 00:20:02.079 "3c0a087f-d559-46d2-90b0-c393f31a42a5" 00:20:02.079 ], 00:20:02.079 "product_name": "Malloc disk", 00:20:02.079 "block_size": 512, 00:20:02.079 "num_blocks": 16384, 00:20:02.079 "uuid": "3c0a087f-d559-46d2-90b0-c393f31a42a5", 00:20:02.079 "assigned_rate_limits": { 00:20:02.079 "rw_ios_per_sec": 0, 00:20:02.079 "rw_mbytes_per_sec": 0, 00:20:02.079 "r_mbytes_per_sec": 0, 00:20:02.079 
"w_mbytes_per_sec": 0 00:20:02.079 }, 00:20:02.079 "claimed": true, 00:20:02.079 "claim_type": "exclusive_write", 00:20:02.079 "zoned": false, 00:20:02.079 "supported_io_types": { 00:20:02.079 "read": true, 00:20:02.079 "write": true, 00:20:02.079 "unmap": true, 00:20:02.079 "write_zeroes": true, 00:20:02.079 "flush": true, 00:20:02.079 "reset": true, 00:20:02.079 "compare": false, 00:20:02.079 "compare_and_write": false, 00:20:02.079 "abort": true, 00:20:02.080 "nvme_admin": false, 00:20:02.080 "nvme_io": false 00:20:02.080 }, 00:20:02.080 "memory_domains": [ 00:20:02.080 { 00:20:02.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.080 "dma_device_type": 2 00:20:02.080 } 00:20:02.080 ], 00:20:02.080 "driver_specific": {} 00:20:02.080 }, 00:20:02.080 { 00:20:02.080 "name": "Passthru0", 00:20:02.080 "aliases": [ 00:20:02.080 "118d5410-4f25-5902-96e0-9441a244bdf1" 00:20:02.080 ], 00:20:02.080 "product_name": "passthru", 00:20:02.080 "block_size": 512, 00:20:02.080 "num_blocks": 16384, 00:20:02.080 "uuid": "118d5410-4f25-5902-96e0-9441a244bdf1", 00:20:02.080 "assigned_rate_limits": { 00:20:02.080 "rw_ios_per_sec": 0, 00:20:02.080 "rw_mbytes_per_sec": 0, 00:20:02.080 "r_mbytes_per_sec": 0, 00:20:02.080 "w_mbytes_per_sec": 0 00:20:02.080 }, 00:20:02.080 "claimed": false, 00:20:02.080 "zoned": false, 00:20:02.080 "supported_io_types": { 00:20:02.080 "read": true, 00:20:02.080 "write": true, 00:20:02.080 "unmap": true, 00:20:02.080 "write_zeroes": true, 00:20:02.080 "flush": true, 00:20:02.080 "reset": true, 00:20:02.080 "compare": false, 00:20:02.080 "compare_and_write": false, 00:20:02.080 "abort": true, 00:20:02.080 "nvme_admin": false, 00:20:02.080 "nvme_io": false 00:20:02.080 }, 00:20:02.080 "memory_domains": [ 00:20:02.080 { 00:20:02.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.080 "dma_device_type": 2 00:20:02.080 } 00:20:02.080 ], 00:20:02.080 "driver_specific": { 00:20:02.080 "passthru": { 00:20:02.080 "name": "Passthru0", 00:20:02.080 "base_bdev_name": "Malloc2" 00:20:02.080 } 00:20:02.080 } 00:20:02.080 } 00:20:02.080 ]' 00:20:02.080 15:57:04 -- rpc/rpc.sh@21 -- # jq length 00:20:02.080 15:57:04 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:20:02.080 15:57:04 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:20:02.080 15:57:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.080 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:02.080 15:57:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.080 15:57:04 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:20:02.080 15:57:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.080 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:02.080 15:57:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.080 15:57:04 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:02.080 15:57:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.080 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:02.080 15:57:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.080 15:57:04 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:20:02.339 15:57:04 -- rpc/rpc.sh@26 -- # jq length 00:20:02.339 15:57:04 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:20:02.339 00:20:02.339 real 0m0.288s 00:20:02.339 user 0m0.203s 00:20:02.339 sys 0m0.030s 00:20:02.339 15:57:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.339 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:20:02.339 ************************************ 00:20:02.339 END TEST 
rpc_daemon_integrity 00:20:02.339 ************************************ 00:20:02.339 15:57:05 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:02.339 15:57:05 -- rpc/rpc.sh@84 -- # killprocess 53776 00:20:02.339 15:57:05 -- common/autotest_common.sh@926 -- # '[' -z 53776 ']' 00:20:02.339 15:57:05 -- common/autotest_common.sh@930 -- # kill -0 53776 00:20:02.339 15:57:05 -- common/autotest_common.sh@931 -- # uname 00:20:02.339 15:57:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:02.339 15:57:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53776 00:20:02.339 15:57:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:02.339 killing process with pid 53776 00:20:02.339 15:57:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:02.339 15:57:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53776' 00:20:02.339 15:57:05 -- common/autotest_common.sh@945 -- # kill 53776 00:20:02.339 15:57:05 -- common/autotest_common.sh@950 -- # wait 53776 00:20:02.598 00:20:02.598 real 0m2.664s 00:20:02.598 user 0m3.635s 00:20:02.598 sys 0m0.502s 00:20:02.598 15:57:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.598 15:57:05 -- common/autotest_common.sh@10 -- # set +x 00:20:02.598 ************************************ 00:20:02.598 END TEST rpc 00:20:02.598 ************************************ 00:20:02.598 15:57:05 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:20:02.598 15:57:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:02.598 15:57:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:02.598 15:57:05 -- common/autotest_common.sh@10 -- # set +x 00:20:02.598 ************************************ 00:20:02.598 START TEST rpc_client 00:20:02.598 ************************************ 00:20:02.598 15:57:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:20:02.598 * Looking for test storage... 
00:20:02.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:20:02.598 15:57:05 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:20:02.856 OK 00:20:02.856 15:57:05 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:20:02.856 00:20:02.856 real 0m0.086s 00:20:02.856 user 0m0.040s 00:20:02.856 sys 0m0.050s 00:20:02.856 15:57:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.856 15:57:05 -- common/autotest_common.sh@10 -- # set +x 00:20:02.856 ************************************ 00:20:02.856 END TEST rpc_client 00:20:02.856 ************************************ 00:20:02.856 15:57:05 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:20:02.856 15:57:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:02.856 15:57:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:02.856 15:57:05 -- common/autotest_common.sh@10 -- # set +x 00:20:02.856 ************************************ 00:20:02.856 START TEST json_config 00:20:02.856 ************************************ 00:20:02.856 15:57:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:20:02.856 15:57:05 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:02.856 15:57:05 -- nvmf/common.sh@7 -- # uname -s 00:20:02.856 15:57:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.856 15:57:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.856 15:57:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.856 15:57:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.856 15:57:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.856 15:57:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.856 15:57:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.856 15:57:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.856 15:57:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.856 15:57:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.856 15:57:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:20:02.856 15:57:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:20:02.856 15:57:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.856 15:57:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.856 15:57:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:02.856 15:57:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:02.856 15:57:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.857 15:57:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.857 15:57:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.857 15:57:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.857 15:57:05 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.857 15:57:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.857 15:57:05 -- paths/export.sh@5 -- # export PATH 00:20:02.857 15:57:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.857 15:57:05 -- nvmf/common.sh@46 -- # : 0 00:20:02.857 15:57:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:02.857 15:57:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:02.857 15:57:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:02.857 15:57:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.857 15:57:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.857 15:57:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:02.857 15:57:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:02.857 15:57:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:02.857 15:57:05 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:20:02.857 15:57:05 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:20:02.857 15:57:05 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:20:02.857 15:57:05 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:20:02.857 15:57:05 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:20:02.857 15:57:05 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:20:02.857 15:57:05 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:20:02.857 15:57:05 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:20:02.857 15:57:05 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:20:02.857 15:57:05 -- json_config/json_config.sh@32 -- # declare -A app_params 00:20:02.857 15:57:05 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:20:02.857 15:57:05 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:20:02.857 15:57:05 -- json_config/json_config.sh@43 -- # last_event_id=0 00:20:02.857 15:57:05 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:20:02.857 INFO: JSON configuration test init 
00:20:02.857 15:57:05 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:20:02.857 15:57:05 -- json_config/json_config.sh@420 -- # json_config_test_init 00:20:02.857 15:57:05 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:20:02.857 15:57:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:02.857 15:57:05 -- common/autotest_common.sh@10 -- # set +x 00:20:02.857 15:57:05 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:20:02.857 15:57:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:02.857 15:57:05 -- common/autotest_common.sh@10 -- # set +x 00:20:02.857 15:57:05 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:20:02.857 15:57:05 -- json_config/json_config.sh@98 -- # local app=target 00:20:02.857 15:57:05 -- json_config/json_config.sh@99 -- # shift 00:20:02.857 15:57:05 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:20:02.857 15:57:05 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:20:02.857 15:57:05 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:20:02.857 15:57:05 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:20:02.857 15:57:05 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:20:02.857 15:57:05 -- json_config/json_config.sh@111 -- # app_pid[$app]=54013 00:20:02.857 15:57:05 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:20:02.857 Waiting for target to run... 00:20:02.857 15:57:05 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:20:02.857 15:57:05 -- json_config/json_config.sh@114 -- # waitforlisten 54013 /var/tmp/spdk_tgt.sock 00:20:02.857 15:57:05 -- common/autotest_common.sh@819 -- # '[' -z 54013 ']' 00:20:02.857 15:57:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:20:02.857 15:57:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:02.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:20:02.857 15:57:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:20:02.857 15:57:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:02.857 15:57:05 -- common/autotest_common.sh@10 -- # set +x 00:20:02.857 [2024-07-22 15:57:05.648303] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:20:02.857 [2024-07-22 15:57:05.648395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54013 ] 00:20:03.116 [2024-07-22 15:57:05.927251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.116 [2024-07-22 15:57:05.974710] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:03.116 [2024-07-22 15:57:05.974898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.050 15:57:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:04.050 00:20:04.050 15:57:06 -- common/autotest_common.sh@852 -- # return 0 00:20:04.050 15:57:06 -- json_config/json_config.sh@115 -- # echo '' 00:20:04.050 15:57:06 -- json_config/json_config.sh@322 -- # create_accel_config 00:20:04.050 15:57:06 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:20:04.050 15:57:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:04.050 15:57:06 -- common/autotest_common.sh@10 -- # set +x 00:20:04.050 15:57:06 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:20:04.050 15:57:06 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:20:04.050 15:57:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:04.050 15:57:06 -- common/autotest_common.sh@10 -- # set +x 00:20:04.050 15:57:06 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:20:04.050 15:57:06 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:20:04.050 15:57:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:20:04.309 15:57:07 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:20:04.309 15:57:07 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:20:04.309 15:57:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:04.309 15:57:07 -- common/autotest_common.sh@10 -- # set +x 00:20:04.309 15:57:07 -- json_config/json_config.sh@48 -- # local ret=0 00:20:04.309 15:57:07 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:20:04.309 15:57:07 -- json_config/json_config.sh@49 -- # local enabled_types 00:20:04.309 15:57:07 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:20:04.309 15:57:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:20:04.309 15:57:07 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:20:04.875 15:57:07 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:20:04.875 15:57:07 -- json_config/json_config.sh@51 -- # local get_types 00:20:04.875 15:57:07 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:20:04.875 15:57:07 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:20:04.875 15:57:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:04.875 15:57:07 -- common/autotest_common.sh@10 -- # set +x 00:20:04.875 15:57:07 -- json_config/json_config.sh@58 -- # return 0 00:20:04.875 15:57:07 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:20:04.875 15:57:07 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 
00:20:04.876 15:57:07 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:20:04.876 15:57:07 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:20:04.876 15:57:07 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:20:04.876 15:57:07 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:20:04.876 15:57:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:04.876 15:57:07 -- common/autotest_common.sh@10 -- # set +x 00:20:04.876 15:57:07 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:20:04.876 15:57:07 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:20:04.876 15:57:07 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:20:04.876 15:57:07 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:20:04.876 15:57:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:20:05.134 MallocForNvmf0 00:20:05.134 15:57:07 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:20:05.134 15:57:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:20:05.392 MallocForNvmf1 00:20:05.392 15:57:08 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:20:05.392 15:57:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:20:05.650 [2024-07-22 15:57:08.367297] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.650 15:57:08 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:05.650 15:57:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:05.909 15:57:08 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:20:05.909 15:57:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:20:06.167 15:57:08 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:20:06.167 15:57:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:20:06.426 15:57:09 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:20:06.426 15:57:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:20:06.684 [2024-07-22 15:57:09.347856] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:06.684 15:57:09 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:20:06.684 15:57:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:06.684 15:57:09 -- common/autotest_common.sh@10 -- # set +x 00:20:06.684 15:57:09 -- 
json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:20:06.684 15:57:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:06.684 15:57:09 -- common/autotest_common.sh@10 -- # set +x 00:20:06.684 15:57:09 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:20:06.684 15:57:09 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:20:06.684 15:57:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:20:06.943 MallocBdevForConfigChangeCheck 00:20:06.943 15:57:09 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:20:06.943 15:57:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:06.943 15:57:09 -- common/autotest_common.sh@10 -- # set +x 00:20:06.943 15:57:09 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:20:06.943 15:57:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:20:07.511 INFO: shutting down applications... 00:20:07.511 15:57:10 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:20:07.511 15:57:10 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:20:07.511 15:57:10 -- json_config/json_config.sh@431 -- # json_config_clear target 00:20:07.511 15:57:10 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:20:07.511 15:57:10 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:20:07.770 Calling clear_iscsi_subsystem 00:20:07.770 Calling clear_nvmf_subsystem 00:20:07.770 Calling clear_nbd_subsystem 00:20:07.770 Calling clear_ublk_subsystem 00:20:07.770 Calling clear_vhost_blk_subsystem 00:20:07.770 Calling clear_vhost_scsi_subsystem 00:20:07.770 Calling clear_scheduler_subsystem 00:20:07.770 Calling clear_bdev_subsystem 00:20:07.770 Calling clear_accel_subsystem 00:20:07.770 Calling clear_vmd_subsystem 00:20:07.770 Calling clear_sock_subsystem 00:20:07.770 Calling clear_iobuf_subsystem 00:20:07.770 15:57:10 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:20:07.770 15:57:10 -- json_config/json_config.sh@396 -- # count=100 00:20:07.770 15:57:10 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:20:07.770 15:57:10 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:20:07.770 15:57:10 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:20:07.770 15:57:10 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:20:08.028 15:57:10 -- json_config/json_config.sh@398 -- # break 00:20:08.029 15:57:10 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:20:08.029 15:57:10 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:20:08.029 15:57:10 -- json_config/json_config.sh@120 -- # local app=target 00:20:08.029 15:57:10 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:20:08.029 15:57:10 -- json_config/json_config.sh@124 -- # [[ -n 54013 ]] 00:20:08.029 15:57:10 -- json_config/json_config.sh@127 -- # kill -SIGINT 54013 00:20:08.029 15:57:10 -- json_config/json_config.sh@129 -- # (( i = 0 )) 
00:20:08.029 15:57:10 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:20:08.029 15:57:10 -- json_config/json_config.sh@130 -- # kill -0 54013 00:20:08.029 15:57:10 -- json_config/json_config.sh@134 -- # sleep 0.5 00:20:08.595 15:57:11 -- json_config/json_config.sh@129 -- # (( i++ )) 00:20:08.595 15:57:11 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:20:08.595 15:57:11 -- json_config/json_config.sh@130 -- # kill -0 54013 00:20:08.595 15:57:11 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:20:08.595 15:57:11 -- json_config/json_config.sh@132 -- # break 00:20:08.595 15:57:11 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:20:08.595 SPDK target shutdown done 00:20:08.595 INFO: relaunching applications... 00:20:08.595 15:57:11 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:20:08.595 15:57:11 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:20:08.595 15:57:11 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:20:08.595 15:57:11 -- json_config/json_config.sh@98 -- # local app=target 00:20:08.595 15:57:11 -- json_config/json_config.sh@99 -- # shift 00:20:08.595 15:57:11 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:20:08.595 15:57:11 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:20:08.596 15:57:11 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:20:08.596 15:57:11 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:20:08.596 15:57:11 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:20:08.596 15:57:11 -- json_config/json_config.sh@111 -- # app_pid[$app]=54198 00:20:08.596 Waiting for target to run... 00:20:08.596 15:57:11 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:20:08.596 15:57:11 -- json_config/json_config.sh@114 -- # waitforlisten 54198 /var/tmp/spdk_tgt.sock 00:20:08.596 15:57:11 -- common/autotest_common.sh@819 -- # '[' -z 54198 ']' 00:20:08.596 15:57:11 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:20:08.596 15:57:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:20:08.596 15:57:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:08.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:20:08.596 15:57:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:20:08.596 15:57:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:08.596 15:57:11 -- common/autotest_common.sh@10 -- # set +x 00:20:08.596 [2024-07-22 15:57:11.369095] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:20:08.596 [2024-07-22 15:57:11.369193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54198 ] 00:20:08.854 [2024-07-22 15:57:11.650881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.111 [2024-07-22 15:57:11.718677] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:09.111 [2024-07-22 15:57:11.718916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.369 [2024-07-22 15:57:12.021094] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.369 [2024-07-22 15:57:12.053179] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:09.628 15:57:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:09.628 15:57:12 -- common/autotest_common.sh@852 -- # return 0 00:20:09.628 00:20:09.628 15:57:12 -- json_config/json_config.sh@115 -- # echo '' 00:20:09.628 15:57:12 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:20:09.628 INFO: Checking if target configuration is the same... 00:20:09.628 15:57:12 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:20:09.628 15:57:12 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:20:09.628 15:57:12 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:20:09.628 15:57:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:20:09.628 + '[' 2 -ne 2 ']' 00:20:09.628 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:20:09.628 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:20:09.628 + rootdir=/home/vagrant/spdk_repo/spdk 00:20:09.628 +++ basename /dev/fd/62 00:20:09.628 ++ mktemp /tmp/62.XXX 00:20:09.628 + tmp_file_1=/tmp/62.yFW 00:20:09.628 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:20:09.628 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:20:09.628 + tmp_file_2=/tmp/spdk_tgt_config.json.HVv 00:20:09.628 + ret=0 00:20:09.628 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:20:10.193 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:20:10.193 + diff -u /tmp/62.yFW /tmp/spdk_tgt_config.json.HVv 00:20:10.193 INFO: JSON config files are the same 00:20:10.193 + echo 'INFO: JSON config files are the same' 00:20:10.193 + rm /tmp/62.yFW /tmp/spdk_tgt_config.json.HVv 00:20:10.193 + exit 0 00:20:10.193 15:57:12 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:20:10.193 15:57:12 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:20:10.193 INFO: changing configuration and checking if this can be detected... 
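
For reference, the same-configuration check above amounts to sorting both JSON configs and diffing them. A rough manual equivalent, assuming a target still listening on /var/tmp/spdk_tgt.sock and the repo layout used in this run (the /tmp output names below are only illustrative), would be:

  cd /home/vagrant/spdk_repo/spdk
  # dump the live configuration and normalize key order
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | ./test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
  # normalize the on-disk configuration the target was launched with
  ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/disk_sorted.json
  # an empty diff is what the test reports as 'INFO: JSON config files are the same'
  diff -u /tmp/disk_sorted.json /tmp/live_sorted.json
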
00:20:10.193 15:57:12 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:20:10.193 15:57:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:20:10.451 15:57:13 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:20:10.451 15:57:13 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:20:10.451 15:57:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:20:10.451 + '[' 2 -ne 2 ']' 00:20:10.451 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:20:10.451 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:20:10.451 + rootdir=/home/vagrant/spdk_repo/spdk 00:20:10.451 +++ basename /dev/fd/62 00:20:10.451 ++ mktemp /tmp/62.XXX 00:20:10.451 + tmp_file_1=/tmp/62.TsD 00:20:10.451 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:20:10.451 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:20:10.452 + tmp_file_2=/tmp/spdk_tgt_config.json.Tk5 00:20:10.452 + ret=0 00:20:10.452 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:20:10.710 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:20:10.969 + diff -u /tmp/62.TsD /tmp/spdk_tgt_config.json.Tk5 00:20:10.969 + ret=1 00:20:10.969 + echo '=== Start of file: /tmp/62.TsD ===' 00:20:10.969 + cat /tmp/62.TsD 00:20:10.970 + echo '=== End of file: /tmp/62.TsD ===' 00:20:10.970 + echo '' 00:20:10.970 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Tk5 ===' 00:20:10.970 + cat /tmp/spdk_tgt_config.json.Tk5 00:20:10.970 + echo '=== End of file: /tmp/spdk_tgt_config.json.Tk5 ===' 00:20:10.970 + echo '' 00:20:10.970 + rm /tmp/62.TsD /tmp/spdk_tgt_config.json.Tk5 00:20:10.970 + exit 1 00:20:10.970 INFO: configuration change detected. 00:20:10.970 15:57:13 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
00:20:10.970 15:57:13 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:20:10.970 15:57:13 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:20:10.970 15:57:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:10.970 15:57:13 -- common/autotest_common.sh@10 -- # set +x 00:20:10.970 15:57:13 -- json_config/json_config.sh@360 -- # local ret=0 00:20:10.970 15:57:13 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:20:10.970 15:57:13 -- json_config/json_config.sh@370 -- # [[ -n 54198 ]] 00:20:10.970 15:57:13 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:20:10.970 15:57:13 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:20:10.970 15:57:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:10.970 15:57:13 -- common/autotest_common.sh@10 -- # set +x 00:20:10.970 15:57:13 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:20:10.970 15:57:13 -- json_config/json_config.sh@246 -- # uname -s 00:20:10.970 15:57:13 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:20:10.970 15:57:13 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:20:10.970 15:57:13 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:20:10.970 15:57:13 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:20:10.970 15:57:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:10.970 15:57:13 -- common/autotest_common.sh@10 -- # set +x 00:20:10.970 15:57:13 -- json_config/json_config.sh@376 -- # killprocess 54198 00:20:10.970 15:57:13 -- common/autotest_common.sh@926 -- # '[' -z 54198 ']' 00:20:10.970 15:57:13 -- common/autotest_common.sh@930 -- # kill -0 54198 00:20:10.970 15:57:13 -- common/autotest_common.sh@931 -- # uname 00:20:10.970 15:57:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:10.970 15:57:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54198 00:20:10.970 15:57:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:10.970 15:57:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:10.970 killing process with pid 54198 00:20:10.970 15:57:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54198' 00:20:10.970 15:57:13 -- common/autotest_common.sh@945 -- # kill 54198 00:20:10.970 15:57:13 -- common/autotest_common.sh@950 -- # wait 54198 00:20:11.229 15:57:13 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:20:11.229 15:57:13 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:20:11.229 15:57:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:11.229 15:57:13 -- common/autotest_common.sh@10 -- # set +x 00:20:11.229 15:57:13 -- json_config/json_config.sh@381 -- # return 0 00:20:11.229 15:57:13 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:20:11.229 INFO: Success 00:20:11.229 00:20:11.229 real 0m8.460s 00:20:11.229 user 0m12.609s 00:20:11.229 sys 0m1.345s 00:20:11.229 15:57:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:11.229 ************************************ 00:20:11.229 15:57:13 -- common/autotest_common.sh@10 -- # set +x 00:20:11.229 END TEST json_config 00:20:11.229 ************************************ 00:20:11.229 15:57:14 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:20:11.229 
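
The target configuration that the json_config test above saved and re-loaded was built entirely over the RPC socket. Pieced together from the tgt_rpc calls earlier in this run (same socket and repo paths; nothing here beyond what the log already shows), the sequence was roughly:

  cd /home/vagrant/spdk_repo/spdk
  # two malloc bdevs to act as namespaces
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  # TCP transport, one subsystem, its namespaces, and a listener on 127.0.0.1:4420
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
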
15:57:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:11.229 15:57:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:11.229 15:57:14 -- common/autotest_common.sh@10 -- # set +x 00:20:11.229 ************************************ 00:20:11.229 START TEST json_config_extra_key 00:20:11.229 ************************************ 00:20:11.229 15:57:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:20:11.229 15:57:14 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:11.229 15:57:14 -- nvmf/common.sh@7 -- # uname -s 00:20:11.229 15:57:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:11.229 15:57:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:11.229 15:57:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:11.229 15:57:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:11.229 15:57:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:11.229 15:57:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:11.229 15:57:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:11.229 15:57:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:11.229 15:57:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:11.229 15:57:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:11.229 15:57:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:20:11.229 15:57:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:20:11.229 15:57:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:11.229 15:57:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:11.229 15:57:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:11.229 15:57:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:11.229 15:57:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:11.229 15:57:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:11.229 15:57:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:11.229 15:57:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.229 15:57:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.229 15:57:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:20:11.229 15:57:14 -- paths/export.sh@5 -- # export PATH 00:20:11.229 15:57:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.229 15:57:14 -- nvmf/common.sh@46 -- # : 0 00:20:11.229 15:57:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:11.229 15:57:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:11.229 15:57:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:11.229 15:57:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:11.229 15:57:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:11.229 15:57:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:11.229 15:57:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:11.229 15:57:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:11.229 15:57:14 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:20:11.229 15:57:14 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:20:11.229 15:57:14 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:20:11.229 15:57:14 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:20:11.229 15:57:14 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:20:11.229 15:57:14 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:20:11.229 15:57:14 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:20:11.229 15:57:14 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:20:11.229 15:57:14 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:20:11.229 INFO: launching applications... 00:20:11.229 15:57:14 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:20:11.229 15:57:14 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:20:11.229 15:57:14 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:20:11.229 15:57:14 -- json_config/json_config_extra_key.sh@25 -- # shift 00:20:11.229 15:57:14 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:20:11.229 15:57:14 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:20:11.229 15:57:14 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=54343 00:20:11.229 15:57:14 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:20:11.229 Waiting for target to run... 
00:20:11.229 15:57:14 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:20:11.229 15:57:14 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 54343 /var/tmp/spdk_tgt.sock 00:20:11.229 15:57:14 -- common/autotest_common.sh@819 -- # '[' -z 54343 ']' 00:20:11.229 15:57:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:20:11.229 15:57:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:11.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:20:11.229 15:57:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:20:11.230 15:57:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:11.230 15:57:14 -- common/autotest_common.sh@10 -- # set +x 00:20:11.488 [2024-07-22 15:57:14.162664] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:11.488 [2024-07-22 15:57:14.162797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54343 ] 00:20:11.746 [2024-07-22 15:57:14.466154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.746 [2024-07-22 15:57:14.509823] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:11.746 [2024-07-22 15:57:14.510013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.679 15:57:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:12.679 15:57:15 -- common/autotest_common.sh@852 -- # return 0 00:20:12.679 15:57:15 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:20:12.679 00:20:12.679 INFO: shutting down applications... 00:20:12.679 15:57:15 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
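
The extra_key variant above skips the RPC build-up and hands the target a ready-made JSON config at startup. A minimal sketch of the same launch-and-teardown cycle, using only the flags and paths shown in this log (the backgrounding and read-back step are illustrative, not part of the captured commands):

  cd /home/vagrant/spdk_repo/spdk
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json ./test/json_config/extra_key.json &
  tgt_pid=$!
  # retry until the RPC socket answers, then read the loaded configuration back
  ./scripts/rpc.py -r 100 -t 2 -s /var/tmp/spdk_tgt.sock save_config
  # the test shuts the target down the same way: SIGINT, then wait for exit
  kill -SIGINT "$tgt_pid"
  wait "$tgt_pid"
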
00:20:12.679 15:57:15 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:20:12.679 15:57:15 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:20:12.679 15:57:15 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:20:12.679 15:57:15 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 54343 ]] 00:20:12.679 15:57:15 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 54343 00:20:12.679 15:57:15 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:20:12.679 15:57:15 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:20:12.679 15:57:15 -- json_config/json_config_extra_key.sh@50 -- # kill -0 54343 00:20:12.679 15:57:15 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:20:12.937 15:57:15 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:20:12.937 15:57:15 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:20:12.937 15:57:15 -- json_config/json_config_extra_key.sh@50 -- # kill -0 54343 00:20:12.937 15:57:15 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:20:12.937 15:57:15 -- json_config/json_config_extra_key.sh@52 -- # break 00:20:12.937 15:57:15 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:20:12.937 SPDK target shutdown done 00:20:12.937 15:57:15 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:20:12.937 Success 00:20:12.937 15:57:15 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:20:12.937 00:20:12.937 real 0m1.691s 00:20:12.937 user 0m1.635s 00:20:12.937 sys 0m0.339s 00:20:12.937 15:57:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.937 15:57:15 -- common/autotest_common.sh@10 -- # set +x 00:20:12.937 ************************************ 00:20:12.937 END TEST json_config_extra_key 00:20:12.937 ************************************ 00:20:12.937 15:57:15 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:20:12.937 15:57:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:12.937 15:57:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:12.937 15:57:15 -- common/autotest_common.sh@10 -- # set +x 00:20:12.937 ************************************ 00:20:12.937 START TEST alias_rpc 00:20:12.937 ************************************ 00:20:12.937 15:57:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:20:13.195 * Looking for test storage... 00:20:13.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:20:13.195 15:57:15 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:20:13.195 15:57:15 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=54407 00:20:13.195 15:57:15 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 54407 00:20:13.195 15:57:15 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:13.195 15:57:15 -- common/autotest_common.sh@819 -- # '[' -z 54407 ']' 00:20:13.195 15:57:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.195 15:57:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:13.195 15:57:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:13.195 15:57:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:13.195 15:57:15 -- common/autotest_common.sh@10 -- # set +x 00:20:13.195 [2024-07-22 15:57:15.875209] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:13.195 [2024-07-22 15:57:15.875304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54407 ] 00:20:13.195 [2024-07-22 15:57:16.009863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.454 [2024-07-22 15:57:16.067371] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:13.454 [2024-07-22 15:57:16.067560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.388 15:57:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:14.388 15:57:16 -- common/autotest_common.sh@852 -- # return 0 00:20:14.388 15:57:16 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:20:14.388 15:57:17 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 54407 00:20:14.388 15:57:17 -- common/autotest_common.sh@926 -- # '[' -z 54407 ']' 00:20:14.388 15:57:17 -- common/autotest_common.sh@930 -- # kill -0 54407 00:20:14.388 15:57:17 -- common/autotest_common.sh@931 -- # uname 00:20:14.388 15:57:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:14.388 15:57:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54407 00:20:14.388 15:57:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:14.388 15:57:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:14.388 15:57:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54407' 00:20:14.388 killing process with pid 54407 00:20:14.388 15:57:17 -- common/autotest_common.sh@945 -- # kill 54407 00:20:14.388 15:57:17 -- common/autotest_common.sh@950 -- # wait 54407 00:20:14.646 00:20:14.646 real 0m1.740s 00:20:14.646 user 0m2.177s 00:20:14.646 sys 0m0.303s 00:20:14.646 15:57:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:14.646 15:57:17 -- common/autotest_common.sh@10 -- # set +x 00:20:14.646 ************************************ 00:20:14.646 END TEST alias_rpc 00:20:14.646 ************************************ 00:20:14.905 15:57:17 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:20:14.905 15:57:17 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:20:14.905 15:57:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:14.905 15:57:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:14.905 15:57:17 -- common/autotest_common.sh@10 -- # set +x 00:20:14.905 ************************************ 00:20:14.905 START TEST spdkcli_tcp 00:20:14.905 ************************************ 00:20:14.905 15:57:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:20:14.905 * Looking for test storage... 
00:20:14.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:14.905 15:57:17 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:14.905 15:57:17 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:14.905 15:57:17 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:14.905 15:57:17 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:20:14.905 15:57:17 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:20:14.905 15:57:17 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:14.905 15:57:17 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:20:14.905 15:57:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:14.905 15:57:17 -- common/autotest_common.sh@10 -- # set +x 00:20:14.905 15:57:17 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=54476 00:20:14.905 15:57:17 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:20:14.905 15:57:17 -- spdkcli/tcp.sh@27 -- # waitforlisten 54476 00:20:14.905 15:57:17 -- common/autotest_common.sh@819 -- # '[' -z 54476 ']' 00:20:14.905 15:57:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.905 15:57:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:14.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.905 15:57:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.905 15:57:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:14.905 15:57:17 -- common/autotest_common.sh@10 -- # set +x 00:20:14.905 [2024-07-22 15:57:17.669302] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:20:14.905 [2024-07-22 15:57:17.669397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54476 ] 00:20:15.164 [2024-07-22 15:57:17.800837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:15.164 [2024-07-22 15:57:17.859502] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:15.164 [2024-07-22 15:57:17.859742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.164 [2024-07-22 15:57:17.859755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.098 15:57:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:16.098 15:57:18 -- common/autotest_common.sh@852 -- # return 0 00:20:16.098 15:57:18 -- spdkcli/tcp.sh@31 -- # socat_pid=54493 00:20:16.098 15:57:18 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:20:16.098 15:57:18 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:20:16.098 [ 00:20:16.098 "bdev_malloc_delete", 00:20:16.098 "bdev_malloc_create", 00:20:16.098 "bdev_null_resize", 00:20:16.098 "bdev_null_delete", 00:20:16.098 "bdev_null_create", 00:20:16.098 "bdev_nvme_cuse_unregister", 00:20:16.098 "bdev_nvme_cuse_register", 00:20:16.098 "bdev_opal_new_user", 00:20:16.098 "bdev_opal_set_lock_state", 00:20:16.098 "bdev_opal_delete", 00:20:16.098 "bdev_opal_get_info", 00:20:16.098 "bdev_opal_create", 00:20:16.098 "bdev_nvme_opal_revert", 00:20:16.098 "bdev_nvme_opal_init", 00:20:16.098 "bdev_nvme_send_cmd", 00:20:16.098 "bdev_nvme_get_path_iostat", 00:20:16.098 "bdev_nvme_get_mdns_discovery_info", 00:20:16.098 "bdev_nvme_stop_mdns_discovery", 00:20:16.098 "bdev_nvme_start_mdns_discovery", 00:20:16.098 "bdev_nvme_set_multipath_policy", 00:20:16.098 "bdev_nvme_set_preferred_path", 00:20:16.098 "bdev_nvme_get_io_paths", 00:20:16.098 "bdev_nvme_remove_error_injection", 00:20:16.098 "bdev_nvme_add_error_injection", 00:20:16.098 "bdev_nvme_get_discovery_info", 00:20:16.098 "bdev_nvme_stop_discovery", 00:20:16.098 "bdev_nvme_start_discovery", 00:20:16.098 "bdev_nvme_get_controller_health_info", 00:20:16.098 "bdev_nvme_disable_controller", 00:20:16.098 "bdev_nvme_enable_controller", 00:20:16.098 "bdev_nvme_reset_controller", 00:20:16.098 "bdev_nvme_get_transport_statistics", 00:20:16.098 "bdev_nvme_apply_firmware", 00:20:16.098 "bdev_nvme_detach_controller", 00:20:16.098 "bdev_nvme_get_controllers", 00:20:16.098 "bdev_nvme_attach_controller", 00:20:16.098 "bdev_nvme_set_hotplug", 00:20:16.098 "bdev_nvme_set_options", 00:20:16.098 "bdev_passthru_delete", 00:20:16.098 "bdev_passthru_create", 00:20:16.098 "bdev_lvol_grow_lvstore", 00:20:16.098 "bdev_lvol_get_lvols", 00:20:16.098 "bdev_lvol_get_lvstores", 00:20:16.098 "bdev_lvol_delete", 00:20:16.098 "bdev_lvol_set_read_only", 00:20:16.098 "bdev_lvol_resize", 00:20:16.098 "bdev_lvol_decouple_parent", 00:20:16.098 "bdev_lvol_inflate", 00:20:16.098 "bdev_lvol_rename", 00:20:16.098 "bdev_lvol_clone_bdev", 00:20:16.098 "bdev_lvol_clone", 00:20:16.098 "bdev_lvol_snapshot", 00:20:16.098 "bdev_lvol_create", 00:20:16.098 "bdev_lvol_delete_lvstore", 00:20:16.098 "bdev_lvol_rename_lvstore", 00:20:16.098 "bdev_lvol_create_lvstore", 00:20:16.098 "bdev_raid_set_options", 00:20:16.098 "bdev_raid_remove_base_bdev", 00:20:16.098 "bdev_raid_add_base_bdev", 
00:20:16.098 "bdev_raid_delete", 00:20:16.098 "bdev_raid_create", 00:20:16.098 "bdev_raid_get_bdevs", 00:20:16.098 "bdev_error_inject_error", 00:20:16.098 "bdev_error_delete", 00:20:16.098 "bdev_error_create", 00:20:16.098 "bdev_split_delete", 00:20:16.098 "bdev_split_create", 00:20:16.098 "bdev_delay_delete", 00:20:16.098 "bdev_delay_create", 00:20:16.098 "bdev_delay_update_latency", 00:20:16.098 "bdev_zone_block_delete", 00:20:16.098 "bdev_zone_block_create", 00:20:16.098 "blobfs_create", 00:20:16.098 "blobfs_detect", 00:20:16.098 "blobfs_set_cache_size", 00:20:16.098 "bdev_aio_delete", 00:20:16.098 "bdev_aio_rescan", 00:20:16.098 "bdev_aio_create", 00:20:16.098 "bdev_ftl_set_property", 00:20:16.098 "bdev_ftl_get_properties", 00:20:16.098 "bdev_ftl_get_stats", 00:20:16.098 "bdev_ftl_unmap", 00:20:16.098 "bdev_ftl_unload", 00:20:16.098 "bdev_ftl_delete", 00:20:16.098 "bdev_ftl_load", 00:20:16.098 "bdev_ftl_create", 00:20:16.098 "bdev_virtio_attach_controller", 00:20:16.098 "bdev_virtio_scsi_get_devices", 00:20:16.098 "bdev_virtio_detach_controller", 00:20:16.098 "bdev_virtio_blk_set_hotplug", 00:20:16.098 "bdev_iscsi_delete", 00:20:16.098 "bdev_iscsi_create", 00:20:16.098 "bdev_iscsi_set_options", 00:20:16.098 "bdev_uring_delete", 00:20:16.098 "bdev_uring_create", 00:20:16.098 "accel_error_inject_error", 00:20:16.098 "ioat_scan_accel_module", 00:20:16.098 "dsa_scan_accel_module", 00:20:16.098 "iaa_scan_accel_module", 00:20:16.098 "vfu_virtio_create_scsi_endpoint", 00:20:16.098 "vfu_virtio_scsi_remove_target", 00:20:16.098 "vfu_virtio_scsi_add_target", 00:20:16.098 "vfu_virtio_create_blk_endpoint", 00:20:16.098 "vfu_virtio_delete_endpoint", 00:20:16.098 "iscsi_set_options", 00:20:16.098 "iscsi_get_auth_groups", 00:20:16.098 "iscsi_auth_group_remove_secret", 00:20:16.098 "iscsi_auth_group_add_secret", 00:20:16.098 "iscsi_delete_auth_group", 00:20:16.098 "iscsi_create_auth_group", 00:20:16.098 "iscsi_set_discovery_auth", 00:20:16.098 "iscsi_get_options", 00:20:16.098 "iscsi_target_node_request_logout", 00:20:16.099 "iscsi_target_node_set_redirect", 00:20:16.099 "iscsi_target_node_set_auth", 00:20:16.099 "iscsi_target_node_add_lun", 00:20:16.099 "iscsi_get_connections", 00:20:16.099 "iscsi_portal_group_set_auth", 00:20:16.099 "iscsi_start_portal_group", 00:20:16.099 "iscsi_delete_portal_group", 00:20:16.099 "iscsi_create_portal_group", 00:20:16.099 "iscsi_get_portal_groups", 00:20:16.099 "iscsi_delete_target_node", 00:20:16.099 "iscsi_target_node_remove_pg_ig_maps", 00:20:16.099 "iscsi_target_node_add_pg_ig_maps", 00:20:16.099 "iscsi_create_target_node", 00:20:16.099 "iscsi_get_target_nodes", 00:20:16.099 "iscsi_delete_initiator_group", 00:20:16.099 "iscsi_initiator_group_remove_initiators", 00:20:16.099 "iscsi_initiator_group_add_initiators", 00:20:16.099 "iscsi_create_initiator_group", 00:20:16.099 "iscsi_get_initiator_groups", 00:20:16.099 "nvmf_set_crdt", 00:20:16.099 "nvmf_set_config", 00:20:16.099 "nvmf_set_max_subsystems", 00:20:16.099 "nvmf_subsystem_get_listeners", 00:20:16.099 "nvmf_subsystem_get_qpairs", 00:20:16.099 "nvmf_subsystem_get_controllers", 00:20:16.099 "nvmf_get_stats", 00:20:16.099 "nvmf_get_transports", 00:20:16.099 "nvmf_create_transport", 00:20:16.099 "nvmf_get_targets", 00:20:16.099 "nvmf_delete_target", 00:20:16.099 "nvmf_create_target", 00:20:16.099 "nvmf_subsystem_allow_any_host", 00:20:16.099 "nvmf_subsystem_remove_host", 00:20:16.099 "nvmf_subsystem_add_host", 00:20:16.099 "nvmf_subsystem_remove_ns", 00:20:16.099 "nvmf_subsystem_add_ns", 00:20:16.099 
"nvmf_subsystem_listener_set_ana_state", 00:20:16.099 "nvmf_discovery_get_referrals", 00:20:16.099 "nvmf_discovery_remove_referral", 00:20:16.099 "nvmf_discovery_add_referral", 00:20:16.099 "nvmf_subsystem_remove_listener", 00:20:16.099 "nvmf_subsystem_add_listener", 00:20:16.099 "nvmf_delete_subsystem", 00:20:16.099 "nvmf_create_subsystem", 00:20:16.099 "nvmf_get_subsystems", 00:20:16.099 "env_dpdk_get_mem_stats", 00:20:16.099 "nbd_get_disks", 00:20:16.099 "nbd_stop_disk", 00:20:16.099 "nbd_start_disk", 00:20:16.099 "ublk_recover_disk", 00:20:16.099 "ublk_get_disks", 00:20:16.099 "ublk_stop_disk", 00:20:16.099 "ublk_start_disk", 00:20:16.099 "ublk_destroy_target", 00:20:16.099 "ublk_create_target", 00:20:16.099 "virtio_blk_create_transport", 00:20:16.099 "virtio_blk_get_transports", 00:20:16.099 "vhost_controller_set_coalescing", 00:20:16.099 "vhost_get_controllers", 00:20:16.099 "vhost_delete_controller", 00:20:16.099 "vhost_create_blk_controller", 00:20:16.099 "vhost_scsi_controller_remove_target", 00:20:16.099 "vhost_scsi_controller_add_target", 00:20:16.099 "vhost_start_scsi_controller", 00:20:16.099 "vhost_create_scsi_controller", 00:20:16.099 "thread_set_cpumask", 00:20:16.099 "framework_get_scheduler", 00:20:16.099 "framework_set_scheduler", 00:20:16.099 "framework_get_reactors", 00:20:16.099 "thread_get_io_channels", 00:20:16.099 "thread_get_pollers", 00:20:16.099 "thread_get_stats", 00:20:16.099 "framework_monitor_context_switch", 00:20:16.099 "spdk_kill_instance", 00:20:16.099 "log_enable_timestamps", 00:20:16.099 "log_get_flags", 00:20:16.099 "log_clear_flag", 00:20:16.099 "log_set_flag", 00:20:16.099 "log_get_level", 00:20:16.099 "log_set_level", 00:20:16.099 "log_get_print_level", 00:20:16.099 "log_set_print_level", 00:20:16.099 "framework_enable_cpumask_locks", 00:20:16.099 "framework_disable_cpumask_locks", 00:20:16.099 "framework_wait_init", 00:20:16.099 "framework_start_init", 00:20:16.099 "scsi_get_devices", 00:20:16.099 "bdev_get_histogram", 00:20:16.099 "bdev_enable_histogram", 00:20:16.099 "bdev_set_qos_limit", 00:20:16.099 "bdev_set_qd_sampling_period", 00:20:16.099 "bdev_get_bdevs", 00:20:16.099 "bdev_reset_iostat", 00:20:16.099 "bdev_get_iostat", 00:20:16.099 "bdev_examine", 00:20:16.099 "bdev_wait_for_examine", 00:20:16.099 "bdev_set_options", 00:20:16.099 "notify_get_notifications", 00:20:16.099 "notify_get_types", 00:20:16.099 "accel_get_stats", 00:20:16.099 "accel_set_options", 00:20:16.099 "accel_set_driver", 00:20:16.099 "accel_crypto_key_destroy", 00:20:16.099 "accel_crypto_keys_get", 00:20:16.099 "accel_crypto_key_create", 00:20:16.099 "accel_assign_opc", 00:20:16.099 "accel_get_module_info", 00:20:16.099 "accel_get_opc_assignments", 00:20:16.099 "vmd_rescan", 00:20:16.099 "vmd_remove_device", 00:20:16.099 "vmd_enable", 00:20:16.099 "sock_set_default_impl", 00:20:16.099 "sock_impl_set_options", 00:20:16.099 "sock_impl_get_options", 00:20:16.099 "iobuf_get_stats", 00:20:16.099 "iobuf_set_options", 00:20:16.099 "framework_get_pci_devices", 00:20:16.099 "framework_get_config", 00:20:16.099 "framework_get_subsystems", 00:20:16.099 "vfu_tgt_set_base_path", 00:20:16.099 "trace_get_info", 00:20:16.099 "trace_get_tpoint_group_mask", 00:20:16.099 "trace_disable_tpoint_group", 00:20:16.099 "trace_enable_tpoint_group", 00:20:16.099 "trace_clear_tpoint_mask", 00:20:16.099 "trace_set_tpoint_mask", 00:20:16.099 "spdk_get_version", 00:20:16.099 "rpc_get_methods" 00:20:16.099 ] 00:20:16.099 15:57:18 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:20:16.099 
15:57:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:16.099 15:57:18 -- common/autotest_common.sh@10 -- # set +x 00:20:16.358 15:57:18 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:16.358 15:57:18 -- spdkcli/tcp.sh@38 -- # killprocess 54476 00:20:16.358 15:57:18 -- common/autotest_common.sh@926 -- # '[' -z 54476 ']' 00:20:16.358 15:57:18 -- common/autotest_common.sh@930 -- # kill -0 54476 00:20:16.358 15:57:18 -- common/autotest_common.sh@931 -- # uname 00:20:16.358 15:57:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:16.358 15:57:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54476 00:20:16.358 15:57:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:16.358 killing process with pid 54476 00:20:16.358 15:57:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:16.358 15:57:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54476' 00:20:16.358 15:57:18 -- common/autotest_common.sh@945 -- # kill 54476 00:20:16.358 15:57:18 -- common/autotest_common.sh@950 -- # wait 54476 00:20:16.616 00:20:16.616 real 0m1.737s 00:20:16.616 user 0m3.471s 00:20:16.616 sys 0m0.320s 00:20:16.616 15:57:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:16.616 ************************************ 00:20:16.617 END TEST spdkcli_tcp 00:20:16.617 ************************************ 00:20:16.617 15:57:19 -- common/autotest_common.sh@10 -- # set +x 00:20:16.617 15:57:19 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:20:16.617 15:57:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:16.617 15:57:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:16.617 15:57:19 -- common/autotest_common.sh@10 -- # set +x 00:20:16.617 ************************************ 00:20:16.617 START TEST dpdk_mem_utility 00:20:16.617 ************************************ 00:20:16.617 15:57:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:20:16.617 * Looking for test storage... 00:20:16.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:20:16.617 15:57:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:20:16.617 15:57:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=54566 00:20:16.617 15:57:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 54566 00:20:16.617 15:57:19 -- common/autotest_common.sh@819 -- # '[' -z 54566 ']' 00:20:16.617 15:57:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:16.617 15:57:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.617 15:57:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:16.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.617 15:57:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.617 15:57:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:16.617 15:57:19 -- common/autotest_common.sh@10 -- # set +x 00:20:16.617 [2024-07-22 15:57:19.439644] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:20:16.617 [2024-07-22 15:57:19.439734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54566 ] 00:20:16.874 [2024-07-22 15:57:19.570463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.874 [2024-07-22 15:57:19.653645] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:16.874 [2024-07-22 15:57:19.653865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.811 15:57:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:17.811 15:57:20 -- common/autotest_common.sh@852 -- # return 0 00:20:17.811 15:57:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:20:17.811 15:57:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:20:17.811 15:57:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:17.811 15:57:20 -- common/autotest_common.sh@10 -- # set +x 00:20:17.811 { 00:20:17.811 "filename": "/tmp/spdk_mem_dump.txt" 00:20:17.811 } 00:20:17.811 15:57:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:17.811 15:57:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:20:17.811 DPDK memory size 814.000000 MiB in 1 heap(s) 00:20:17.811 1 heaps totaling size 814.000000 MiB 00:20:17.811 size: 814.000000 MiB heap id: 0 00:20:17.811 end heaps---------- 00:20:17.811 8 mempools totaling size 598.116089 MiB 00:20:17.811 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:20:17.811 size: 158.602051 MiB name: PDU_data_out_Pool 00:20:17.811 size: 84.521057 MiB name: bdev_io_54566 00:20:17.811 size: 51.011292 MiB name: evtpool_54566 00:20:17.811 size: 50.003479 MiB name: msgpool_54566 00:20:17.811 size: 21.763794 MiB name: PDU_Pool 00:20:17.811 size: 19.513306 MiB name: SCSI_TASK_Pool 00:20:17.811 size: 0.026123 MiB name: Session_Pool 00:20:17.811 end mempools------- 00:20:17.811 6 memzones totaling size 4.142822 MiB 00:20:17.811 size: 1.000366 MiB name: RG_ring_0_54566 00:20:17.811 size: 1.000366 MiB name: RG_ring_1_54566 00:20:17.811 size: 1.000366 MiB name: RG_ring_4_54566 00:20:17.811 size: 1.000366 MiB name: RG_ring_5_54566 00:20:17.811 size: 0.125366 MiB name: RG_ring_2_54566 00:20:17.811 size: 0.015991 MiB name: RG_ring_3_54566 00:20:17.811 end memzones------- 00:20:17.811 15:57:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:20:17.811 heap id: 0 total size: 814.000000 MiB number of busy elements: 296 number of free elements: 15 00:20:17.811 list of free elements. 
size: 12.472656 MiB 00:20:17.811 element at address: 0x200000400000 with size: 1.999512 MiB 00:20:17.811 element at address: 0x200018e00000 with size: 0.999878 MiB 00:20:17.811 element at address: 0x200019000000 with size: 0.999878 MiB 00:20:17.811 element at address: 0x200003e00000 with size: 0.996277 MiB 00:20:17.811 element at address: 0x200031c00000 with size: 0.994446 MiB 00:20:17.811 element at address: 0x200013800000 with size: 0.978699 MiB 00:20:17.811 element at address: 0x200007000000 with size: 0.959839 MiB 00:20:17.811 element at address: 0x200019200000 with size: 0.936584 MiB 00:20:17.811 element at address: 0x200000200000 with size: 0.832825 MiB 00:20:17.811 element at address: 0x20001aa00000 with size: 0.569702 MiB 00:20:17.811 element at address: 0x20000b200000 with size: 0.488892 MiB 00:20:17.811 element at address: 0x200000800000 with size: 0.486145 MiB 00:20:17.811 element at address: 0x200019400000 with size: 0.485657 MiB 00:20:17.811 element at address: 0x200027e00000 with size: 0.396484 MiB 00:20:17.811 element at address: 0x200003a00000 with size: 0.347839 MiB 00:20:17.811 list of standard malloc elements. size: 199.264771 MiB 00:20:17.811 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:20:17.811 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:20:17.811 element at address: 0x200018efff80 with size: 1.000122 MiB 00:20:17.811 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:20:17.811 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:20:17.811 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:20:17.811 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:20:17.811 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:20:17.811 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:20:17.811 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d6480 with size: 0.000183 MiB 
00:20:17.811 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:20:17.811 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:20:17.812 element at address: 0x20000087c740 with size: 0.000183 MiB 00:20:17.812 element at address: 0x20000087c800 with size: 0.000183 MiB 00:20:17.812 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x20000087c980 with size: 0.000183 MiB 00:20:17.812 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:20:17.812 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:20:17.812 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:20:17.812 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:20:17.812 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:20:17.812 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a59180 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a59240 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a59300 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a59480 with size: 0.000183 MiB 00:20:17.812 element at 
address: 0x200003a59540 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a59600 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a59780 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a59840 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a59900 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003adb300 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003adb500 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003affa80 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003affb40 with size: 0.000183 MiB 00:20:17.812 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:20:17.812 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:20:17.812 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:20:17.812 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:20:17.812 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:20:17.812 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:20:17.812 element at address: 0x20000b27d640 
with size: 0.000183 MiB 00:20:17.812 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:20:17.812 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:20:17.813 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:20:17.813 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:20:17.813 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:20:17.813 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa93940 with size: 0.000183 MiB 
00:20:17.813 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:20:17.813 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:20:17.814 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:20:17.814 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:20:17.814 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:20:17.814 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:20:17.814 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:20:17.814 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:20:17.814 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:20:17.814 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:20:17.814 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:20:17.814 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e65800 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e658c0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6c4c0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:20:17.814 element at 
address: 0x200027e6ce40 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6f300 
with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:20:17.814 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:20:17.815 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:20:17.815 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:20:17.815 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:20:17.815 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:20:17.815 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:20:17.815 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:20:17.815 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:20:17.815 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:20:17.815 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:20:17.815 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:20:17.815 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:20:17.815 list of memzone associated elements. size: 602.262573 MiB 00:20:17.815 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:20:17.815 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:20:17.815 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:20:17.815 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:20:17.815 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:20:17.815 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_54566_0 00:20:17.815 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:20:17.815 associated memzone info: size: 48.002930 MiB name: MP_evtpool_54566_0 00:20:17.815 element at address: 0x200003fff380 with size: 48.003052 MiB 00:20:17.815 associated memzone info: size: 48.002930 MiB name: MP_msgpool_54566_0 00:20:17.815 element at address: 0x2000195be940 with size: 20.255554 MiB 00:20:17.815 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:20:17.815 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:20:17.815 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:20:17.815 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:20:17.815 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_54566 00:20:17.815 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:20:17.815 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_54566 00:20:17.815 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:20:17.815 associated memzone info: size: 1.007996 MiB name: MP_evtpool_54566 00:20:17.815 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:20:17.815 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:20:17.815 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:20:17.815 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:20:17.815 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:20:17.815 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:20:17.815 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:20:17.815 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:20:17.815 element at address: 0x200003eff180 with size: 1.000488 MiB 00:20:17.815 associated memzone info: size: 
1.000366 MiB name: RG_ring_0_54566 00:20:17.815 element at address: 0x200003affc00 with size: 1.000488 MiB 00:20:17.815 associated memzone info: size: 1.000366 MiB name: RG_ring_1_54566 00:20:17.815 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:20:17.815 associated memzone info: size: 1.000366 MiB name: RG_ring_4_54566 00:20:17.815 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:20:17.815 associated memzone info: size: 1.000366 MiB name: RG_ring_5_54566 00:20:17.815 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:20:17.815 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_54566 00:20:17.815 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:20:17.815 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:20:17.815 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:20:17.815 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:20:17.815 element at address: 0x20001947c540 with size: 0.250488 MiB 00:20:17.815 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:20:17.815 element at address: 0x200003adf880 with size: 0.125488 MiB 00:20:17.815 associated memzone info: size: 0.125366 MiB name: RG_ring_2_54566 00:20:17.815 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:20:17.815 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:20:17.815 element at address: 0x200027e65980 with size: 0.023743 MiB 00:20:17.815 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:20:17.815 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:20:17.815 associated memzone info: size: 0.015991 MiB name: RG_ring_3_54566 00:20:17.815 element at address: 0x200027e6bac0 with size: 0.002441 MiB 00:20:17.815 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:20:17.815 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:20:17.815 associated memzone info: size: 0.000183 MiB name: MP_msgpool_54566 00:20:17.815 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:20:17.815 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_54566 00:20:17.815 element at address: 0x200027e6c580 with size: 0.000305 MiB 00:20:17.815 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:20:17.815 15:57:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:20:17.815 15:57:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 54566 00:20:17.815 15:57:20 -- common/autotest_common.sh@926 -- # '[' -z 54566 ']' 00:20:17.816 15:57:20 -- common/autotest_common.sh@930 -- # kill -0 54566 00:20:17.816 15:57:20 -- common/autotest_common.sh@931 -- # uname 00:20:17.816 15:57:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:17.816 15:57:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54566 00:20:17.816 15:57:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:17.816 killing process with pid 54566 00:20:17.816 15:57:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:17.816 15:57:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54566' 00:20:17.816 15:57:20 -- common/autotest_common.sh@945 -- # kill 54566 00:20:17.816 15:57:20 -- common/autotest_common.sh@950 -- # wait 54566 00:20:18.105 00:20:18.105 real 0m1.587s 00:20:18.105 user 0m1.876s 00:20:18.105 sys 0m0.311s 00:20:18.105 15:57:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:20:18.105 ************************************ 00:20:18.105 END TEST dpdk_mem_utility 00:20:18.105 ************************************ 00:20:18.105 15:57:20 -- common/autotest_common.sh@10 -- # set +x 00:20:18.105 15:57:20 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:20:18.105 15:57:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:18.105 15:57:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:18.105 15:57:20 -- common/autotest_common.sh@10 -- # set +x 00:20:18.105 ************************************ 00:20:18.105 START TEST event 00:20:18.105 ************************************ 00:20:18.105 15:57:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:20:18.363 * Looking for test storage... 00:20:18.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:20:18.363 15:57:21 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:18.363 15:57:21 -- bdev/nbd_common.sh@6 -- # set -e 00:20:18.363 15:57:21 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:20:18.363 15:57:21 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:18.363 15:57:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:18.363 15:57:21 -- common/autotest_common.sh@10 -- # set +x 00:20:18.363 ************************************ 00:20:18.363 START TEST event_perf 00:20:18.363 ************************************ 00:20:18.363 15:57:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:20:18.363 Running I/O for 1 seconds...[2024-07-22 15:57:21.032670] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:18.363 [2024-07-22 15:57:21.032787] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54637 ] 00:20:18.363 [2024-07-22 15:57:21.184866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:18.622 [2024-07-22 15:57:21.271100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.622 [2024-07-22 15:57:21.271153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.622 [2024-07-22 15:57:21.271235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:18.622 [2024-07-22 15:57:21.271246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.557 Running I/O for 1 seconds... 00:20:19.557 lcore 0: 171782 00:20:19.557 lcore 1: 171780 00:20:19.557 lcore 2: 171782 00:20:19.557 lcore 3: 171783 00:20:19.557 done. 
00:20:19.557 00:20:19.557 real 0m1.366s 00:20:19.557 user 0m4.185s 00:20:19.557 sys 0m0.056s 00:20:19.557 15:57:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:19.557 15:57:22 -- common/autotest_common.sh@10 -- # set +x 00:20:19.557 ************************************ 00:20:19.557 END TEST event_perf 00:20:19.557 ************************************ 00:20:19.557 15:57:22 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:20:19.557 15:57:22 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:20:19.557 15:57:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:19.557 15:57:22 -- common/autotest_common.sh@10 -- # set +x 00:20:19.816 ************************************ 00:20:19.816 START TEST event_reactor 00:20:19.816 ************************************ 00:20:19.816 15:57:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:20:19.817 [2024-07-22 15:57:22.438603] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:19.817 [2024-07-22 15:57:22.438884] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54675 ] 00:20:19.817 [2024-07-22 15:57:22.569392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.817 [2024-07-22 15:57:22.652538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.196 test_start 00:20:21.196 oneshot 00:20:21.196 tick 100 00:20:21.196 tick 100 00:20:21.196 tick 250 00:20:21.196 tick 100 00:20:21.196 tick 100 00:20:21.196 tick 250 00:20:21.196 tick 500 00:20:21.196 tick 100 00:20:21.196 tick 100 00:20:21.196 tick 100 00:20:21.196 tick 250 00:20:21.196 tick 100 00:20:21.196 tick 100 00:20:21.196 test_end 00:20:21.196 00:20:21.196 real 0m1.330s 00:20:21.196 user 0m1.179s 00:20:21.196 sys 0m0.043s 00:20:21.196 15:57:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:21.196 15:57:23 -- common/autotest_common.sh@10 -- # set +x 00:20:21.196 ************************************ 00:20:21.196 END TEST event_reactor 00:20:21.196 ************************************ 00:20:21.196 15:57:23 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:20:21.196 15:57:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:20:21.196 15:57:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:21.196 15:57:23 -- common/autotest_common.sh@10 -- # set +x 00:20:21.196 ************************************ 00:20:21.196 START TEST event_reactor_perf 00:20:21.196 ************************************ 00:20:21.196 15:57:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:20:21.196 [2024-07-22 15:57:23.807242] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:20:21.196 [2024-07-22 15:57:23.807365] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54705 ] 00:20:21.196 [2024-07-22 15:57:23.949557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.196 [2024-07-22 15:57:24.007093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.571 test_start 00:20:22.572 test_end 00:20:22.572 Performance: 352994 events per second 00:20:22.572 ************************************ 00:20:22.572 END TEST event_reactor_perf 00:20:22.572 ************************************ 00:20:22.572 00:20:22.572 real 0m1.314s 00:20:22.572 user 0m1.157s 00:20:22.572 sys 0m0.049s 00:20:22.572 15:57:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:22.572 15:57:25 -- common/autotest_common.sh@10 -- # set +x 00:20:22.572 15:57:25 -- event/event.sh@49 -- # uname -s 00:20:22.572 15:57:25 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:20:22.572 15:57:25 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:20:22.572 15:57:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:22.572 15:57:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:22.572 15:57:25 -- common/autotest_common.sh@10 -- # set +x 00:20:22.572 ************************************ 00:20:22.572 START TEST event_scheduler 00:20:22.572 ************************************ 00:20:22.572 15:57:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:20:22.572 * Looking for test storage... 00:20:22.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:20:22.572 15:57:25 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:20:22.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.572 15:57:25 -- scheduler/scheduler.sh@35 -- # scheduler_pid=54766 00:20:22.572 15:57:25 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:20:22.572 15:57:25 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:20:22.572 15:57:25 -- scheduler/scheduler.sh@37 -- # waitforlisten 54766 00:20:22.572 15:57:25 -- common/autotest_common.sh@819 -- # '[' -z 54766 ']' 00:20:22.572 15:57:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.572 15:57:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:22.572 15:57:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.572 15:57:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:22.572 15:57:25 -- common/autotest_common.sh@10 -- # set +x 00:20:22.572 [2024-07-22 15:57:25.275849] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:20:22.572 [2024-07-22 15:57:25.275977] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54766 ] 00:20:22.572 [2024-07-22 15:57:25.419655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:22.830 [2024-07-22 15:57:25.481753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.830 [2024-07-22 15:57:25.485528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.830 [2024-07-22 15:57:25.485645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:22.830 [2024-07-22 15:57:25.485656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.809 15:57:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:23.809 15:57:26 -- common/autotest_common.sh@852 -- # return 0 00:20:23.809 15:57:26 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:20:23.809 15:57:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.809 15:57:26 -- common/autotest_common.sh@10 -- # set +x 00:20:23.809 POWER: Env isn't set yet! 00:20:23.809 POWER: Attempting to initialise ACPI cpufreq power management... 00:20:23.809 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:20:23.809 POWER: Cannot set governor of lcore 0 to userspace 00:20:23.809 POWER: Attempting to initialise PSTAT power management... 00:20:23.809 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:20:23.809 POWER: Cannot set governor of lcore 0 to performance 00:20:23.809 POWER: Attempting to initialise AMD PSTATE power management... 00:20:23.809 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:20:23.809 POWER: Cannot set governor of lcore 0 to userspace 00:20:23.809 POWER: Attempting to initialise CPPC power management... 00:20:23.809 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:20:23.809 POWER: Cannot set governor of lcore 0 to userspace 00:20:23.809 POWER: Attempting to initialise VM power management... 
00:20:23.809 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:20:23.809 POWER: Unable to set Power Management Environment for lcore 0 00:20:23.809 [2024-07-22 15:57:26.412267] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:20:23.809 [2024-07-22 15:57:26.412288] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:20:23.809 [2024-07-22 15:57:26.412297] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:20:23.809 [2024-07-22 15:57:26.412311] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:20:23.809 [2024-07-22 15:57:26.412319] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:20:23.809 [2024-07-22 15:57:26.412326] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:20:23.809 15:57:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.809 15:57:26 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:20:23.809 15:57:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.809 15:57:26 -- common/autotest_common.sh@10 -- # set +x 00:20:23.809 [2024-07-22 15:57:26.467084] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:20:23.809 15:57:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.809 15:57:26 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:20:23.809 15:57:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:23.809 15:57:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:23.809 15:57:26 -- common/autotest_common.sh@10 -- # set +x 00:20:23.809 ************************************ 00:20:23.809 START TEST scheduler_create_thread 00:20:23.809 ************************************ 00:20:23.809 15:57:26 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:20:23.809 15:57:26 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:20:23.809 15:57:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.809 15:57:26 -- common/autotest_common.sh@10 -- # set +x 00:20:23.809 2 00:20:23.809 15:57:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.809 15:57:26 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:20:23.810 15:57:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.810 15:57:26 -- common/autotest_common.sh@10 -- # set +x 00:20:23.810 3 00:20:23.810 15:57:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.810 15:57:26 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:20:23.810 15:57:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.810 15:57:26 -- common/autotest_common.sh@10 -- # set +x 00:20:23.810 4 00:20:23.810 15:57:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.810 15:57:26 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:20:23.810 15:57:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.810 15:57:26 -- common/autotest_common.sh@10 -- # set +x 00:20:23.810 5 00:20:23.810 15:57:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.810 15:57:26 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:20:23.810 15:57:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.810 15:57:26 -- common/autotest_common.sh@10 -- # set +x 00:20:23.810 6 00:20:23.810 15:57:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.810 15:57:26 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:20:23.810 15:57:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.810 15:57:26 -- common/autotest_common.sh@10 -- # set +x 00:20:23.810 7 00:20:23.810 15:57:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.810 15:57:26 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:20:23.810 15:57:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.810 15:57:26 -- common/autotest_common.sh@10 -- # set +x 00:20:23.810 8 00:20:23.810 15:57:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.810 15:57:26 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:20:23.810 15:57:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.810 15:57:26 -- common/autotest_common.sh@10 -- # set +x 00:20:23.810 9 00:20:23.810 15:57:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.810 15:57:26 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:20:23.810 15:57:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.810 15:57:26 -- common/autotest_common.sh@10 -- # set +x 00:20:23.810 10 00:20:23.810 15:57:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.810 15:57:26 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:20:23.810 15:57:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.810 15:57:26 -- common/autotest_common.sh@10 -- # set +x 00:20:23.810 15:57:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.810 15:57:26 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:20:23.810 15:57:26 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:20:23.810 15:57:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.810 15:57:26 -- common/autotest_common.sh@10 -- # set +x 00:20:23.810 15:57:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.810 15:57:26 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:20:23.810 15:57:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.810 15:57:26 -- common/autotest_common.sh@10 -- # set +x 00:20:23.810 15:57:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.810 15:57:26 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:20:23.810 15:57:26 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:20:23.810 15:57:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.810 15:57:26 -- common/autotest_common.sh@10 -- # set +x 00:20:24.377 15:57:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:24.377 00:20:24.377 ************************************ 00:20:24.377 END TEST scheduler_create_thread 00:20:24.377 ************************************ 00:20:24.377 real 0m0.593s 00:20:24.377 user 0m0.014s 00:20:24.377 sys 0m0.006s 00:20:24.377 15:57:27 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:20:24.377 15:57:27 -- common/autotest_common.sh@10 -- # set +x 00:20:24.377 15:57:27 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:20:24.377 15:57:27 -- scheduler/scheduler.sh@46 -- # killprocess 54766 00:20:24.377 15:57:27 -- common/autotest_common.sh@926 -- # '[' -z 54766 ']' 00:20:24.377 15:57:27 -- common/autotest_common.sh@930 -- # kill -0 54766 00:20:24.377 15:57:27 -- common/autotest_common.sh@931 -- # uname 00:20:24.377 15:57:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:24.377 15:57:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54766 00:20:24.377 15:57:27 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:24.377 killing process with pid 54766 00:20:24.377 15:57:27 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:24.377 15:57:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54766' 00:20:24.377 15:57:27 -- common/autotest_common.sh@945 -- # kill 54766 00:20:24.377 15:57:27 -- common/autotest_common.sh@950 -- # wait 54766 00:20:24.943 [2024-07-22 15:57:27.548976] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:20:24.943 00:20:24.943 real 0m2.588s 00:20:24.943 user 0m5.856s 00:20:24.943 sys 0m0.301s 00:20:24.943 15:57:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:24.943 ************************************ 00:20:24.943 END TEST event_scheduler 00:20:24.943 ************************************ 00:20:24.943 15:57:27 -- common/autotest_common.sh@10 -- # set +x 00:20:24.943 15:57:27 -- event/event.sh@51 -- # modprobe -n nbd 00:20:24.943 15:57:27 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:20:24.943 15:57:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:24.943 15:57:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:24.943 15:57:27 -- common/autotest_common.sh@10 -- # set +x 00:20:24.943 ************************************ 00:20:24.943 START TEST app_repeat 00:20:24.943 ************************************ 00:20:24.943 15:57:27 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:20:24.943 15:57:27 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:24.943 15:57:27 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:24.943 15:57:27 -- event/event.sh@13 -- # local nbd_list 00:20:24.943 15:57:27 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:24.943 15:57:27 -- event/event.sh@14 -- # local bdev_list 00:20:24.943 15:57:27 -- event/event.sh@15 -- # local repeat_times=4 00:20:24.943 15:57:27 -- event/event.sh@17 -- # modprobe nbd 00:20:24.943 Process app_repeat pid: 54843 00:20:24.943 spdk_app_start Round 0 00:20:24.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
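app_repeat drives the same start/exercise/shutdown cycle several times; the loop below is a sketch reconstructed from the event.sh trace lines that follow (repeat_pid, the {0..2} range and the socket path are the values visible in the log, the loop body is abbreviated and hypothetical):

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # wait for the app's RPC socket
    # create Malloc0/Malloc1, attach them to /dev/nbd0 and /dev/nbd1, verify the data,
    # then restart the app with spdk_kill_instance SIGTERM and sleep 3
done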
00:20:24.943 15:57:27 -- event/event.sh@19 -- # repeat_pid=54843 00:20:24.943 15:57:27 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:20:24.943 15:57:27 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:20:24.943 15:57:27 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 54843' 00:20:24.943 15:57:27 -- event/event.sh@23 -- # for i in {0..2} 00:20:24.943 15:57:27 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:20:24.943 15:57:27 -- event/event.sh@25 -- # waitforlisten 54843 /var/tmp/spdk-nbd.sock 00:20:24.943 15:57:27 -- common/autotest_common.sh@819 -- # '[' -z 54843 ']' 00:20:24.943 15:57:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:24.943 15:57:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:24.943 15:57:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:24.943 15:57:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:24.943 15:57:27 -- common/autotest_common.sh@10 -- # set +x 00:20:25.201 [2024-07-22 15:57:27.810920] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:25.201 [2024-07-22 15:57:27.811037] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54843 ] 00:20:25.201 [2024-07-22 15:57:27.953447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:25.201 [2024-07-22 15:57:28.011624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.201 [2024-07-22 15:57:28.011634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.134 15:57:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:26.134 15:57:28 -- common/autotest_common.sh@852 -- # return 0 00:20:26.134 15:57:28 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:20:26.393 Malloc0 00:20:26.393 15:57:29 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:20:26.651 Malloc1 00:20:26.651 15:57:29 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:20:26.651 15:57:29 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:26.651 15:57:29 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:26.651 15:57:29 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:26.651 15:57:29 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:26.652 15:57:29 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:26.652 15:57:29 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:20:26.652 15:57:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:26.652 15:57:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:26.652 15:57:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:26.652 15:57:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:26.652 15:57:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:26.652 15:57:29 -- bdev/nbd_common.sh@12 -- # local i 00:20:26.652 15:57:29 -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:26.652 15:57:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:26.652 15:57:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:20:26.910 /dev/nbd0 00:20:26.910 15:57:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:26.910 15:57:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:26.910 15:57:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:26.910 15:57:29 -- common/autotest_common.sh@857 -- # local i 00:20:26.910 15:57:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:26.910 15:57:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:26.910 15:57:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:26.910 15:57:29 -- common/autotest_common.sh@861 -- # break 00:20:26.910 15:57:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:26.910 15:57:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:26.910 15:57:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:20:26.910 1+0 records in 00:20:26.910 1+0 records out 00:20:26.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525073 s, 7.8 MB/s 00:20:26.910 15:57:29 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:26.910 15:57:29 -- common/autotest_common.sh@874 -- # size=4096 00:20:26.910 15:57:29 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:26.910 15:57:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:26.910 15:57:29 -- common/autotest_common.sh@877 -- # return 0 00:20:26.910 15:57:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:26.910 15:57:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:26.910 15:57:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:20:27.476 /dev/nbd1 00:20:27.476 15:57:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:27.476 15:57:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:27.476 15:57:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:27.476 15:57:30 -- common/autotest_common.sh@857 -- # local i 00:20:27.476 15:57:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:27.476 15:57:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:27.476 15:57:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:27.476 15:57:30 -- common/autotest_common.sh@861 -- # break 00:20:27.476 15:57:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:27.476 15:57:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:27.476 15:57:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:20:27.476 1+0 records in 00:20:27.476 1+0 records out 00:20:27.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414461 s, 9.9 MB/s 00:20:27.476 15:57:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:27.476 15:57:30 -- common/autotest_common.sh@874 -- # size=4096 00:20:27.476 15:57:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:27.476 15:57:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:27.476 15:57:30 -- common/autotest_common.sh@877 -- # return 0 00:20:27.476 
15:57:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:27.476 15:57:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:27.476 15:57:30 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:27.476 15:57:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:27.476 15:57:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:27.735 { 00:20:27.735 "nbd_device": "/dev/nbd0", 00:20:27.735 "bdev_name": "Malloc0" 00:20:27.735 }, 00:20:27.735 { 00:20:27.735 "nbd_device": "/dev/nbd1", 00:20:27.735 "bdev_name": "Malloc1" 00:20:27.735 } 00:20:27.735 ]' 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:27.735 { 00:20:27.735 "nbd_device": "/dev/nbd0", 00:20:27.735 "bdev_name": "Malloc0" 00:20:27.735 }, 00:20:27.735 { 00:20:27.735 "nbd_device": "/dev/nbd1", 00:20:27.735 "bdev_name": "Malloc1" 00:20:27.735 } 00:20:27.735 ]' 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:27.735 /dev/nbd1' 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:27.735 /dev/nbd1' 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@65 -- # count=2 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@66 -- # echo 2 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@95 -- # count=2 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:20:27.735 256+0 records in 00:20:27.735 256+0 records out 00:20:27.735 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0071721 s, 146 MB/s 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:27.735 256+0 records in 00:20:27.735 256+0 records out 00:20:27.735 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0378872 s, 27.7 MB/s 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:27.735 256+0 records in 00:20:27.735 256+0 records out 00:20:27.735 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0319278 s, 32.8 MB/s 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@51 -- # local i 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:27.735 15:57:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:28.302 15:57:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:28.303 15:57:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:28.303 15:57:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:28.303 15:57:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:28.303 15:57:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:28.303 15:57:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:28.303 15:57:30 -- bdev/nbd_common.sh@41 -- # break 00:20:28.303 15:57:30 -- bdev/nbd_common.sh@45 -- # return 0 00:20:28.303 15:57:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:28.303 15:57:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:28.303 15:57:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:28.303 15:57:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:28.303 15:57:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:28.303 15:57:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:28.303 15:57:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:28.303 15:57:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:28.303 15:57:31 -- bdev/nbd_common.sh@41 -- # break 00:20:28.303 15:57:31 -- bdev/nbd_common.sh@45 -- # return 0 00:20:28.303 15:57:31 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:28.303 15:57:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:28.303 15:57:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:28.562 15:57:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:28.562 15:57:31 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:28.562 15:57:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:28.820 15:57:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:28.820 15:57:31 -- bdev/nbd_common.sh@65 -- # echo '' 00:20:28.820 15:57:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:28.820 15:57:31 -- bdev/nbd_common.sh@65 -- # true 00:20:28.820 15:57:31 -- bdev/nbd_common.sh@65 -- # count=0 00:20:28.820 
15:57:31 -- bdev/nbd_common.sh@66 -- # echo 0 00:20:28.820 15:57:31 -- bdev/nbd_common.sh@104 -- # count=0 00:20:28.820 15:57:31 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:28.820 15:57:31 -- bdev/nbd_common.sh@109 -- # return 0 00:20:28.820 15:57:31 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:20:29.077 15:57:31 -- event/event.sh@35 -- # sleep 3 00:20:29.077 [2024-07-22 15:57:31.833259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:29.077 [2024-07-22 15:57:31.892892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.077 [2024-07-22 15:57:31.892904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.077 [2024-07-22 15:57:31.923684] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:20:29.077 [2024-07-22 15:57:31.923737] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:20:32.357 spdk_app_start Round 1 00:20:32.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:32.357 15:57:34 -- event/event.sh@23 -- # for i in {0..2} 00:20:32.357 15:57:34 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:20:32.358 15:57:34 -- event/event.sh@25 -- # waitforlisten 54843 /var/tmp/spdk-nbd.sock 00:20:32.358 15:57:34 -- common/autotest_common.sh@819 -- # '[' -z 54843 ']' 00:20:32.358 15:57:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:32.358 15:57:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:32.358 15:57:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
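Round 0 above already shows the full data path; condensed into a stand-alone sketch, every command is one that appears in the trace and only the loop is added here for readability:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create 64 4096              # -> Malloc0
$RPC bdev_malloc_create 64 4096              # -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest "$nbd"          # verify the data round-trips through the bdev
done
rm nbdrandtest
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1
$RPC spdk_kill_instance SIGTERM              # ends the round; the app restarts for the next one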
00:20:32.358 15:57:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:32.358 15:57:34 -- common/autotest_common.sh@10 -- # set +x 00:20:32.358 15:57:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:32.358 15:57:34 -- common/autotest_common.sh@852 -- # return 0 00:20:32.358 15:57:34 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:20:32.628 Malloc0 00:20:32.628 15:57:35 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:20:32.900 Malloc1 00:20:32.900 15:57:35 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:20:32.900 15:57:35 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:32.900 15:57:35 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:32.900 15:57:35 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:32.900 15:57:35 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:32.900 15:57:35 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:32.900 15:57:35 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:20:32.900 15:57:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:32.900 15:57:35 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:32.900 15:57:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:32.900 15:57:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:32.900 15:57:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:32.900 15:57:35 -- bdev/nbd_common.sh@12 -- # local i 00:20:32.900 15:57:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:32.900 15:57:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:32.900 15:57:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:20:33.159 /dev/nbd0 00:20:33.417 15:57:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:33.417 15:57:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:33.417 15:57:36 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:33.417 15:57:36 -- common/autotest_common.sh@857 -- # local i 00:20:33.417 15:57:36 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:33.417 15:57:36 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:33.417 15:57:36 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:33.417 15:57:36 -- common/autotest_common.sh@861 -- # break 00:20:33.417 15:57:36 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:33.417 15:57:36 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:33.417 15:57:36 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:20:33.417 1+0 records in 00:20:33.417 1+0 records out 00:20:33.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533853 s, 7.7 MB/s 00:20:33.417 15:57:36 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:33.417 15:57:36 -- common/autotest_common.sh@874 -- # size=4096 00:20:33.417 15:57:36 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:33.417 15:57:36 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:33.417 15:57:36 -- common/autotest_common.sh@877 -- # return 0 00:20:33.417 15:57:36 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:33.417 15:57:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:33.417 15:57:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:20:33.676 /dev/nbd1 00:20:33.676 15:57:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:33.676 15:57:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:33.676 15:57:36 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:33.676 15:57:36 -- common/autotest_common.sh@857 -- # local i 00:20:33.676 15:57:36 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:33.676 15:57:36 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:33.676 15:57:36 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:33.676 15:57:36 -- common/autotest_common.sh@861 -- # break 00:20:33.676 15:57:36 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:33.676 15:57:36 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:33.676 15:57:36 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:20:33.676 1+0 records in 00:20:33.676 1+0 records out 00:20:33.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445685 s, 9.2 MB/s 00:20:33.676 15:57:36 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:33.676 15:57:36 -- common/autotest_common.sh@874 -- # size=4096 00:20:33.676 15:57:36 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:33.676 15:57:36 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:33.676 15:57:36 -- common/autotest_common.sh@877 -- # return 0 00:20:33.676 15:57:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:33.676 15:57:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:33.676 15:57:36 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:33.676 15:57:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:33.676 15:57:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:33.934 15:57:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:33.934 { 00:20:33.934 "nbd_device": "/dev/nbd0", 00:20:33.934 "bdev_name": "Malloc0" 00:20:33.934 }, 00:20:33.934 { 00:20:33.934 "nbd_device": "/dev/nbd1", 00:20:33.934 "bdev_name": "Malloc1" 00:20:33.934 } 00:20:33.934 ]' 00:20:33.934 15:57:36 -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:33.934 { 00:20:33.934 "nbd_device": "/dev/nbd0", 00:20:33.934 "bdev_name": "Malloc0" 00:20:33.934 }, 00:20:33.934 { 00:20:33.934 "nbd_device": "/dev/nbd1", 00:20:33.934 "bdev_name": "Malloc1" 00:20:33.934 } 00:20:33.934 ]' 00:20:33.934 15:57:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:33.934 15:57:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:33.934 /dev/nbd1' 00:20:33.934 15:57:36 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:33.934 /dev/nbd1' 00:20:33.934 15:57:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:33.934 15:57:36 -- bdev/nbd_common.sh@65 -- # count=2 00:20:33.934 15:57:36 -- bdev/nbd_common.sh@66 -- # echo 2 00:20:33.934 15:57:36 -- bdev/nbd_common.sh@95 -- # count=2 00:20:33.934 15:57:36 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:20:33.934 15:57:36 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:20:33.934 15:57:36 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:20:33.934 15:57:36 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:33.934 15:57:36 -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:33.934 15:57:36 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:20:33.934 15:57:36 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:33.934 15:57:36 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:20:34.193 256+0 records in 00:20:34.193 256+0 records out 00:20:34.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00580513 s, 181 MB/s 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:34.193 256+0 records in 00:20:34.193 256+0 records out 00:20:34.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262084 s, 40.0 MB/s 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:34.193 256+0 records in 00:20:34.193 256+0 records out 00:20:34.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0392762 s, 26.7 MB/s 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@51 -- # local i 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:34.193 15:57:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:34.451 15:57:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:34.451 15:57:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:34.451 15:57:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:34.451 15:57:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:34.451 15:57:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:34.451 15:57:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:20:34.451 15:57:37 -- bdev/nbd_common.sh@41 -- # break 00:20:34.451 15:57:37 -- bdev/nbd_common.sh@45 -- # return 0 00:20:34.451 15:57:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:34.451 15:57:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:34.710 15:57:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:34.710 15:57:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:34.710 15:57:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:34.710 15:57:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:34.710 15:57:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:34.710 15:57:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:34.710 15:57:37 -- bdev/nbd_common.sh@41 -- # break 00:20:34.710 15:57:37 -- bdev/nbd_common.sh@45 -- # return 0 00:20:34.967 15:57:37 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:34.967 15:57:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:34.967 15:57:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:34.967 15:57:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:34.967 15:57:37 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:34.967 15:57:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:35.225 15:57:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:35.225 15:57:37 -- bdev/nbd_common.sh@65 -- # echo '' 00:20:35.225 15:57:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:35.225 15:57:37 -- bdev/nbd_common.sh@65 -- # true 00:20:35.225 15:57:37 -- bdev/nbd_common.sh@65 -- # count=0 00:20:35.225 15:57:37 -- bdev/nbd_common.sh@66 -- # echo 0 00:20:35.225 15:57:37 -- bdev/nbd_common.sh@104 -- # count=0 00:20:35.225 15:57:37 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:35.225 15:57:37 -- bdev/nbd_common.sh@109 -- # return 0 00:20:35.225 15:57:37 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:20:35.484 15:57:38 -- event/event.sh@35 -- # sleep 3 00:20:35.484 [2024-07-22 15:57:38.281992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:35.484 [2024-07-22 15:57:38.340530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.484 [2024-07-22 15:57:38.340535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.742 [2024-07-22 15:57:38.370184] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:20:35.742 [2024-07-22 15:57:38.370254] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:20:39.024 spdk_app_start Round 2 00:20:39.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
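Between attach and detach the harness also counts the exported devices over the RPC socket; the check is roughly the jq/grep pair seen in the nbd_common.sh trace, pulled out here as a sketch:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
names=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device')
count=$(echo "$names" | grep -c /dev/nbd)    # 2 while Malloc0/Malloc1 are attached
echo "$count"                                # drops to 0 after nbd_stop_disk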
00:20:39.024 15:57:41 -- event/event.sh@23 -- # for i in {0..2} 00:20:39.024 15:57:41 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:20:39.024 15:57:41 -- event/event.sh@25 -- # waitforlisten 54843 /var/tmp/spdk-nbd.sock 00:20:39.024 15:57:41 -- common/autotest_common.sh@819 -- # '[' -z 54843 ']' 00:20:39.024 15:57:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:39.024 15:57:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:39.025 15:57:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:39.025 15:57:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:39.025 15:57:41 -- common/autotest_common.sh@10 -- # set +x 00:20:39.025 15:57:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:39.025 15:57:41 -- common/autotest_common.sh@852 -- # return 0 00:20:39.025 15:57:41 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:20:39.025 Malloc0 00:20:39.025 15:57:41 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:20:39.283 Malloc1 00:20:39.283 15:57:41 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:20:39.283 15:57:41 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:39.283 15:57:41 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:39.283 15:57:41 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:39.283 15:57:41 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:39.283 15:57:41 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:39.283 15:57:41 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:20:39.283 15:57:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:39.283 15:57:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:39.283 15:57:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:39.283 15:57:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:39.283 15:57:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:39.283 15:57:41 -- bdev/nbd_common.sh@12 -- # local i 00:20:39.283 15:57:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:39.283 15:57:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:39.283 15:57:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:20:39.541 /dev/nbd0 00:20:39.542 15:57:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:39.542 15:57:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:39.542 15:57:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:39.542 15:57:42 -- common/autotest_common.sh@857 -- # local i 00:20:39.542 15:57:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:39.542 15:57:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:39.542 15:57:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:39.542 15:57:42 -- common/autotest_common.sh@861 -- # break 00:20:39.542 15:57:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:39.542 15:57:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:39.542 15:57:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:20:39.542 1+0 records in 00:20:39.542 1+0 records out 00:20:39.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556658 s, 7.4 MB/s 00:20:39.542 15:57:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:39.542 15:57:42 -- common/autotest_common.sh@874 -- # size=4096 00:20:39.542 15:57:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:39.542 15:57:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:39.542 15:57:42 -- common/autotest_common.sh@877 -- # return 0 00:20:39.542 15:57:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:39.542 15:57:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:39.542 15:57:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:20:39.800 /dev/nbd1 00:20:39.800 15:57:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:39.800 15:57:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:39.800 15:57:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:39.800 15:57:42 -- common/autotest_common.sh@857 -- # local i 00:20:39.800 15:57:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:39.800 15:57:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:39.800 15:57:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:39.800 15:57:42 -- common/autotest_common.sh@861 -- # break 00:20:39.800 15:57:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:39.800 15:57:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:39.800 15:57:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:20:39.800 1+0 records in 00:20:39.800 1+0 records out 00:20:39.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369641 s, 11.1 MB/s 00:20:39.800 15:57:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:39.800 15:57:42 -- common/autotest_common.sh@874 -- # size=4096 00:20:39.800 15:57:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:39.800 15:57:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:39.800 15:57:42 -- common/autotest_common.sh@877 -- # return 0 00:20:39.800 15:57:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:39.800 15:57:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:39.800 15:57:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:39.800 15:57:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:39.800 15:57:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:40.058 15:57:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:40.058 { 00:20:40.058 "nbd_device": "/dev/nbd0", 00:20:40.058 "bdev_name": "Malloc0" 00:20:40.058 }, 00:20:40.058 { 00:20:40.058 "nbd_device": "/dev/nbd1", 00:20:40.058 "bdev_name": "Malloc1" 00:20:40.058 } 00:20:40.058 ]' 00:20:40.058 15:57:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:40.058 { 00:20:40.058 "nbd_device": "/dev/nbd0", 00:20:40.058 "bdev_name": "Malloc0" 00:20:40.058 }, 00:20:40.058 { 00:20:40.058 "nbd_device": "/dev/nbd1", 00:20:40.058 "bdev_name": "Malloc1" 00:20:40.058 } 00:20:40.058 ]' 00:20:40.058 15:57:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name='/dev/nbd0 00:20:40.059 /dev/nbd1' 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:40.059 /dev/nbd1' 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@65 -- # count=2 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@66 -- # echo 2 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@95 -- # count=2 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:20:40.059 256+0 records in 00:20:40.059 256+0 records out 00:20:40.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00750213 s, 140 MB/s 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:40.059 256+0 records in 00:20:40.059 256+0 records out 00:20:40.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265476 s, 39.5 MB/s 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:40.059 256+0 records in 00:20:40.059 256+0 records out 00:20:40.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245399 s, 42.7 MB/s 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:40.059 15:57:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:20:40.317 15:57:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:40.317 15:57:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:20:40.317 15:57:42 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:20:40.317 15:57:42 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:20:40.317 15:57:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:40.317 15:57:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:40.317 15:57:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:40.317 15:57:42 -- bdev/nbd_common.sh@51 -- # local i 00:20:40.317 
15:57:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:40.317 15:57:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:40.574 15:57:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:40.574 15:57:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:40.574 15:57:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:40.574 15:57:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:40.574 15:57:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:40.574 15:57:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:40.574 15:57:43 -- bdev/nbd_common.sh@41 -- # break 00:20:40.574 15:57:43 -- bdev/nbd_common.sh@45 -- # return 0 00:20:40.574 15:57:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:40.574 15:57:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:40.832 15:57:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:40.832 15:57:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:40.832 15:57:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:40.832 15:57:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:40.832 15:57:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:40.832 15:57:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:40.832 15:57:43 -- bdev/nbd_common.sh@41 -- # break 00:20:40.832 15:57:43 -- bdev/nbd_common.sh@45 -- # return 0 00:20:40.832 15:57:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:40.832 15:57:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:40.832 15:57:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:41.089 15:57:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:41.089 15:57:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:41.089 15:57:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:41.089 15:57:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:41.089 15:57:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:41.089 15:57:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:20:41.089 15:57:43 -- bdev/nbd_common.sh@65 -- # true 00:20:41.089 15:57:43 -- bdev/nbd_common.sh@65 -- # count=0 00:20:41.089 15:57:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:20:41.089 15:57:43 -- bdev/nbd_common.sh@104 -- # count=0 00:20:41.089 15:57:43 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:41.089 15:57:43 -- bdev/nbd_common.sh@109 -- # return 0 00:20:41.089 15:57:43 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:20:41.656 15:57:44 -- event/event.sh@35 -- # sleep 3 00:20:41.656 [2024-07-22 15:57:44.357397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:41.656 [2024-07-22 15:57:44.414957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.656 [2024-07-22 15:57:44.414967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.656 [2024-07-22 15:57:44.444730] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:20:41.656 [2024-07-22 15:57:44.444792] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:20:44.938 15:57:47 -- event/event.sh@38 -- # waitforlisten 54843 /var/tmp/spdk-nbd.sock 00:20:44.938 15:57:47 -- common/autotest_common.sh@819 -- # '[' -z 54843 ']' 00:20:44.938 15:57:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:44.938 15:57:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:44.938 15:57:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:44.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:44.938 15:57:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:44.938 15:57:47 -- common/autotest_common.sh@10 -- # set +x 00:20:44.939 15:57:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:44.939 15:57:47 -- common/autotest_common.sh@852 -- # return 0 00:20:44.939 15:57:47 -- event/event.sh@39 -- # killprocess 54843 00:20:44.939 15:57:47 -- common/autotest_common.sh@926 -- # '[' -z 54843 ']' 00:20:44.939 15:57:47 -- common/autotest_common.sh@930 -- # kill -0 54843 00:20:44.939 15:57:47 -- common/autotest_common.sh@931 -- # uname 00:20:44.939 15:57:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:44.939 15:57:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54843 00:20:44.939 15:57:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:44.939 15:57:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:44.939 15:57:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54843' 00:20:44.939 killing process with pid 54843 00:20:44.939 15:57:47 -- common/autotest_common.sh@945 -- # kill 54843 00:20:44.939 15:57:47 -- common/autotest_common.sh@950 -- # wait 54843 00:20:44.939 spdk_app_start is called in Round 0. 00:20:44.939 Shutdown signal received, stop current app iteration 00:20:44.939 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:20:44.939 spdk_app_start is called in Round 1. 00:20:44.939 Shutdown signal received, stop current app iteration 00:20:44.939 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:20:44.939 spdk_app_start is called in Round 2. 00:20:44.939 Shutdown signal received, stop current app iteration 00:20:44.939 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:20:44.939 spdk_app_start is called in Round 3. 
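The shutdown at the end of the run goes through the killprocess helper traced above; a simplified reconstruction of what those trace lines do (the real helper lives in test/common/autotest_common.sh and carries more error handling):

killprocess() {
    local pid=$1
    kill -0 "$pid" || return                 # nothing to do if it already exited
    ps --no-headers -o comm= "$pid"          # comm is reactor_0 here; the real helper also checks it is not "sudo"
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                              # reap it so the next test starts clean
}
killprocess 54843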
00:20:44.939 Shutdown signal received, stop current app iteration 00:20:44.939 ************************************ 00:20:44.939 END TEST app_repeat 00:20:44.939 ************************************ 00:20:44.939 15:57:47 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:20:44.939 15:57:47 -- event/event.sh@42 -- # return 0 00:20:44.939 00:20:44.939 real 0m19.869s 00:20:44.939 user 0m45.340s 00:20:44.939 sys 0m2.830s 00:20:44.939 15:57:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:44.939 15:57:47 -- common/autotest_common.sh@10 -- # set +x 00:20:44.939 15:57:47 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:20:44.939 15:57:47 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:20:44.939 15:57:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:44.939 15:57:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:44.939 15:57:47 -- common/autotest_common.sh@10 -- # set +x 00:20:44.939 ************************************ 00:20:44.939 START TEST cpu_locks 00:20:44.939 ************************************ 00:20:44.939 15:57:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:20:44.939 * Looking for test storage... 00:20:44.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:20:44.939 15:57:47 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:20:44.939 15:57:47 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:20:44.939 15:57:47 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:20:44.939 15:57:47 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:20:44.939 15:57:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:44.939 15:57:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:44.939 15:57:47 -- common/autotest_common.sh@10 -- # set +x 00:20:44.939 ************************************ 00:20:44.939 START TEST default_locks 00:20:44.939 ************************************ 00:20:44.939 15:57:47 -- common/autotest_common.sh@1104 -- # default_locks 00:20:44.939 15:57:47 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=55286 00:20:44.939 15:57:47 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:44.939 15:57:47 -- event/cpu_locks.sh@47 -- # waitforlisten 55286 00:20:44.939 15:57:47 -- common/autotest_common.sh@819 -- # '[' -z 55286 ']' 00:20:44.939 15:57:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.939 15:57:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:44.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.939 15:57:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.939 15:57:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:44.939 15:57:47 -- common/autotest_common.sh@10 -- # set +x 00:20:45.197 [2024-07-22 15:57:47.814975] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:20:45.197 [2024-07-22 15:57:47.815084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55286 ] 00:20:45.197 [2024-07-22 15:57:47.949089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.197 [2024-07-22 15:57:48.011165] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:45.197 [2024-07-22 15:57:48.011330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.132 15:57:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:46.132 15:57:48 -- common/autotest_common.sh@852 -- # return 0 00:20:46.132 15:57:48 -- event/cpu_locks.sh@49 -- # locks_exist 55286 00:20:46.132 15:57:48 -- event/cpu_locks.sh@22 -- # lslocks -p 55286 00:20:46.132 15:57:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:20:46.698 15:57:49 -- event/cpu_locks.sh@50 -- # killprocess 55286 00:20:46.698 15:57:49 -- common/autotest_common.sh@926 -- # '[' -z 55286 ']' 00:20:46.698 15:57:49 -- common/autotest_common.sh@930 -- # kill -0 55286 00:20:46.698 15:57:49 -- common/autotest_common.sh@931 -- # uname 00:20:46.698 15:57:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:46.699 15:57:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55286 00:20:46.699 15:57:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:46.699 killing process with pid 55286 00:20:46.699 15:57:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:46.699 15:57:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55286' 00:20:46.699 15:57:49 -- common/autotest_common.sh@945 -- # kill 55286 00:20:46.699 15:57:49 -- common/autotest_common.sh@950 -- # wait 55286 00:20:46.958 15:57:49 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 55286 00:20:46.958 15:57:49 -- common/autotest_common.sh@640 -- # local es=0 00:20:46.958 15:57:49 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 55286 00:20:46.958 15:57:49 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:20:46.958 15:57:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:46.958 15:57:49 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:20:46.958 15:57:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:46.958 15:57:49 -- common/autotest_common.sh@643 -- # waitforlisten 55286 00:20:46.958 15:57:49 -- common/autotest_common.sh@819 -- # '[' -z 55286 ']' 00:20:46.958 15:57:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.958 15:57:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:46.958 15:57:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
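default_locks asserts that an spdk_tgt started with -m 0x1 holds a file lock for its claimed core; the check is the lslocks/grep pair traced above, roughly:

locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock   # true while the per-core lock file is held
}
locks_exist 55286 && echo "core lock held by spdk_tgt"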
00:20:46.958 15:57:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:46.958 15:57:49 -- common/autotest_common.sh@10 -- # set +x 00:20:46.958 ERROR: process (pid: 55286) is no longer running 00:20:46.958 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (55286) - No such process 00:20:46.958 15:57:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:46.958 15:57:49 -- common/autotest_common.sh@852 -- # return 1 00:20:46.958 15:57:49 -- common/autotest_common.sh@643 -- # es=1 00:20:46.958 15:57:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:46.958 15:57:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:46.958 15:57:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:46.958 15:57:49 -- event/cpu_locks.sh@54 -- # no_locks 00:20:46.958 15:57:49 -- event/cpu_locks.sh@26 -- # lock_files=() 00:20:46.958 15:57:49 -- event/cpu_locks.sh@26 -- # local lock_files 00:20:46.958 15:57:49 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:20:46.958 00:20:46.958 real 0m1.941s 00:20:46.958 user 0m2.281s 00:20:46.958 sys 0m0.517s 00:20:46.958 15:57:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:46.958 ************************************ 00:20:46.958 END TEST default_locks 00:20:46.958 ************************************ 00:20:46.958 15:57:49 -- common/autotest_common.sh@10 -- # set +x 00:20:46.958 15:57:49 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:20:46.958 15:57:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:46.958 15:57:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:46.958 15:57:49 -- common/autotest_common.sh@10 -- # set +x 00:20:46.958 ************************************ 00:20:46.958 START TEST default_locks_via_rpc 00:20:46.958 ************************************ 00:20:46.958 15:57:49 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:20:46.958 15:57:49 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=55338 00:20:46.958 15:57:49 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:46.958 15:57:49 -- event/cpu_locks.sh@63 -- # waitforlisten 55338 00:20:46.958 15:57:49 -- common/autotest_common.sh@819 -- # '[' -z 55338 ']' 00:20:46.958 15:57:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.958 15:57:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:46.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.958 15:57:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.958 15:57:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:46.958 15:57:49 -- common/autotest_common.sh@10 -- # set +x 00:20:46.958 [2024-07-22 15:57:49.816915] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:20:46.958 [2024-07-22 15:57:49.817037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55338 ] 00:20:47.216 [2024-07-22 15:57:49.957076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.216 [2024-07-22 15:57:50.024249] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:47.216 [2024-07-22 15:57:50.024417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.150 15:57:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:48.150 15:57:50 -- common/autotest_common.sh@852 -- # return 0 00:20:48.150 15:57:50 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:20:48.150 15:57:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:48.150 15:57:50 -- common/autotest_common.sh@10 -- # set +x 00:20:48.150 15:57:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:48.150 15:57:50 -- event/cpu_locks.sh@67 -- # no_locks 00:20:48.150 15:57:50 -- event/cpu_locks.sh@26 -- # lock_files=() 00:20:48.150 15:57:50 -- event/cpu_locks.sh@26 -- # local lock_files 00:20:48.150 15:57:50 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:20:48.150 15:57:50 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:20:48.150 15:57:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:48.150 15:57:50 -- common/autotest_common.sh@10 -- # set +x 00:20:48.150 15:57:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:48.150 15:57:50 -- event/cpu_locks.sh@71 -- # locks_exist 55338 00:20:48.150 15:57:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:20:48.150 15:57:50 -- event/cpu_locks.sh@22 -- # lslocks -p 55338 00:20:48.427 15:57:51 -- event/cpu_locks.sh@73 -- # killprocess 55338 00:20:48.427 15:57:51 -- common/autotest_common.sh@926 -- # '[' -z 55338 ']' 00:20:48.427 15:57:51 -- common/autotest_common.sh@930 -- # kill -0 55338 00:20:48.427 15:57:51 -- common/autotest_common.sh@931 -- # uname 00:20:48.427 15:57:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:48.427 15:57:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55338 00:20:48.693 15:57:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:48.693 killing process with pid 55338 00:20:48.693 15:57:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:48.693 15:57:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55338' 00:20:48.693 15:57:51 -- common/autotest_common.sh@945 -- # kill 55338 00:20:48.693 15:57:51 -- common/autotest_common.sh@950 -- # wait 55338 00:20:48.952 00:20:48.952 real 0m1.828s 00:20:48.952 user 0m2.100s 00:20:48.952 sys 0m0.476s 00:20:48.952 ************************************ 00:20:48.952 END TEST default_locks_via_rpc 00:20:48.952 ************************************ 00:20:48.952 15:57:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:48.952 15:57:51 -- common/autotest_common.sh@10 -- # set +x 00:20:48.952 15:57:51 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:20:48.952 15:57:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:48.952 15:57:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:48.952 15:57:51 -- common/autotest_common.sh@10 -- # set +x 00:20:48.952 
************************************ 00:20:48.952 START TEST non_locking_app_on_locked_coremask 00:20:48.952 ************************************ 00:20:48.952 15:57:51 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:20:48.952 15:57:51 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=55389 00:20:48.952 15:57:51 -- event/cpu_locks.sh@81 -- # waitforlisten 55389 /var/tmp/spdk.sock 00:20:48.952 15:57:51 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:48.952 15:57:51 -- common/autotest_common.sh@819 -- # '[' -z 55389 ']' 00:20:48.952 15:57:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.952 15:57:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:48.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.952 15:57:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.952 15:57:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:48.952 15:57:51 -- common/autotest_common.sh@10 -- # set +x 00:20:48.952 [2024-07-22 15:57:51.692774] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:48.953 [2024-07-22 15:57:51.692921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55389 ] 00:20:49.211 [2024-07-22 15:57:51.839103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.211 [2024-07-22 15:57:51.924681] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:49.211 [2024-07-22 15:57:51.924904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.144 15:57:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:50.144 15:57:52 -- common/autotest_common.sh@852 -- # return 0 00:20:50.144 15:57:52 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=55405 00:20:50.144 15:57:52 -- event/cpu_locks.sh@85 -- # waitforlisten 55405 /var/tmp/spdk2.sock 00:20:50.144 15:57:52 -- common/autotest_common.sh@819 -- # '[' -z 55405 ']' 00:20:50.144 15:57:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:50.144 15:57:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:50.144 15:57:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:50.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:50.144 15:57:52 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:20:50.144 15:57:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:50.144 15:57:52 -- common/autotest_common.sh@10 -- # set +x 00:20:50.144 [2024-07-22 15:57:52.713827] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:50.144 [2024-07-22 15:57:52.713945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55405 ] 00:20:50.144 [2024-07-22 15:57:52.859895] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:20:50.144 [2024-07-22 15:57:52.859957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.144 [2024-07-22 15:57:52.980193] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:50.144 [2024-07-22 15:57:52.980365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.078 15:57:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:51.078 15:57:53 -- common/autotest_common.sh@852 -- # return 0 00:20:51.078 15:57:53 -- event/cpu_locks.sh@87 -- # locks_exist 55389 00:20:51.078 15:57:53 -- event/cpu_locks.sh@22 -- # lslocks -p 55389 00:20:51.078 15:57:53 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:20:52.033 15:57:54 -- event/cpu_locks.sh@89 -- # killprocess 55389 00:20:52.033 15:57:54 -- common/autotest_common.sh@926 -- # '[' -z 55389 ']' 00:20:52.033 15:57:54 -- common/autotest_common.sh@930 -- # kill -0 55389 00:20:52.033 15:57:54 -- common/autotest_common.sh@931 -- # uname 00:20:52.033 15:57:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:52.033 15:57:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55389 00:20:52.034 killing process with pid 55389 00:20:52.034 15:57:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:52.034 15:57:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:52.034 15:57:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55389' 00:20:52.034 15:57:54 -- common/autotest_common.sh@945 -- # kill 55389 00:20:52.034 15:57:54 -- common/autotest_common.sh@950 -- # wait 55389 00:20:52.291 15:57:55 -- event/cpu_locks.sh@90 -- # killprocess 55405 00:20:52.291 15:57:55 -- common/autotest_common.sh@926 -- # '[' -z 55405 ']' 00:20:52.291 15:57:55 -- common/autotest_common.sh@930 -- # kill -0 55405 00:20:52.549 15:57:55 -- common/autotest_common.sh@931 -- # uname 00:20:52.549 15:57:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:52.549 15:57:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55405 00:20:52.549 killing process with pid 55405 00:20:52.549 15:57:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:52.549 15:57:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:52.549 15:57:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55405' 00:20:52.549 15:57:55 -- common/autotest_common.sh@945 -- # kill 55405 00:20:52.549 15:57:55 -- common/autotest_common.sh@950 -- # wait 55405 00:20:52.807 ************************************ 00:20:52.807 END TEST non_locking_app_on_locked_coremask 00:20:52.807 ************************************ 00:20:52.807 00:20:52.807 real 0m3.845s 00:20:52.807 user 0m4.546s 00:20:52.807 sys 0m0.917s 00:20:52.807 15:57:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:52.807 15:57:55 -- common/autotest_common.sh@10 -- # set +x 00:20:52.807 15:57:55 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:20:52.807 15:57:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:52.807 15:57:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:52.807 15:57:55 -- common/autotest_common.sh@10 -- # set +x 00:20:52.807 ************************************ 00:20:52.807 START TEST locking_app_on_unlocked_coremask 00:20:52.807 ************************************ 00:20:52.807 15:57:55 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:20:52.807 15:57:55 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=55467 00:20:52.807 15:57:55 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:20:52.807 15:57:55 -- event/cpu_locks.sh@99 -- # waitforlisten 55467 /var/tmp/spdk.sock 00:20:52.807 15:57:55 -- common/autotest_common.sh@819 -- # '[' -z 55467 ']' 00:20:52.807 15:57:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.807 15:57:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:52.807 15:57:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.807 15:57:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:52.807 15:57:55 -- common/autotest_common.sh@10 -- # set +x 00:20:52.807 [2024-07-22 15:57:55.575991] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:52.807 [2024-07-22 15:57:55.576097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55467 ] 00:20:53.067 [2024-07-22 15:57:55.713796] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:20:53.067 [2024-07-22 15:57:55.713860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.067 [2024-07-22 15:57:55.773605] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:53.067 [2024-07-22 15:57:55.773773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:54.004 15:57:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:54.004 15:57:56 -- common/autotest_common.sh@852 -- # return 0 00:20:54.004 15:57:56 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=55484 00:20:54.004 15:57:56 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:20:54.004 15:57:56 -- event/cpu_locks.sh@103 -- # waitforlisten 55484 /var/tmp/spdk2.sock 00:20:54.004 15:57:56 -- common/autotest_common.sh@819 -- # '[' -z 55484 ']' 00:20:54.004 15:57:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:54.004 15:57:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:54.004 15:57:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:54.004 15:57:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:54.004 15:57:56 -- common/autotest_common.sh@10 -- # set +x 00:20:54.004 [2024-07-22 15:57:56.708028] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:20:54.004 [2024-07-22 15:57:56.708179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55484 ] 00:20:54.004 [2024-07-22 15:57:56.860400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.262 [2024-07-22 15:57:56.980868] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:54.262 [2024-07-22 15:57:56.981051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.197 15:57:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:55.197 15:57:57 -- common/autotest_common.sh@852 -- # return 0 00:20:55.197 15:57:57 -- event/cpu_locks.sh@105 -- # locks_exist 55484 00:20:55.197 15:57:57 -- event/cpu_locks.sh@22 -- # lslocks -p 55484 00:20:55.197 15:57:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:20:56.169 15:57:58 -- event/cpu_locks.sh@107 -- # killprocess 55467 00:20:56.169 15:57:58 -- common/autotest_common.sh@926 -- # '[' -z 55467 ']' 00:20:56.169 15:57:58 -- common/autotest_common.sh@930 -- # kill -0 55467 00:20:56.169 15:57:58 -- common/autotest_common.sh@931 -- # uname 00:20:56.169 15:57:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:56.169 15:57:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55467 00:20:56.169 killing process with pid 55467 00:20:56.169 15:57:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:56.169 15:57:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:56.169 15:57:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55467' 00:20:56.169 15:57:58 -- common/autotest_common.sh@945 -- # kill 55467 00:20:56.169 15:57:58 -- common/autotest_common.sh@950 -- # wait 55467 00:20:56.736 15:57:59 -- event/cpu_locks.sh@108 -- # killprocess 55484 00:20:56.736 15:57:59 -- common/autotest_common.sh@926 -- # '[' -z 55484 ']' 00:20:56.736 15:57:59 -- common/autotest_common.sh@930 -- # kill -0 55484 00:20:56.736 15:57:59 -- common/autotest_common.sh@931 -- # uname 00:20:56.736 15:57:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:56.736 15:57:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55484 00:20:56.736 killing process with pid 55484 00:20:56.736 15:57:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:56.736 15:57:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:56.736 15:57:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55484' 00:20:56.736 15:57:59 -- common/autotest_common.sh@945 -- # kill 55484 00:20:56.736 15:57:59 -- common/autotest_common.sh@950 -- # wait 55484 00:20:56.994 ************************************ 00:20:56.994 END TEST locking_app_on_unlocked_coremask 00:20:56.994 ************************************ 00:20:56.994 00:20:56.994 real 0m4.119s 00:20:56.994 user 0m4.954s 00:20:56.994 sys 0m0.977s 00:20:56.994 15:57:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:56.994 15:57:59 -- common/autotest_common.sh@10 -- # set +x 00:20:56.994 15:57:59 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:20:56.994 15:57:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:56.994 15:57:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:56.994 15:57:59 -- common/autotest_common.sh@10 -- # set +x 
00:20:56.994 ************************************ 00:20:56.994 START TEST locking_app_on_locked_coremask 00:20:56.994 ************************************ 00:20:56.994 15:57:59 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:20:56.994 15:57:59 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=55550 00:20:56.994 15:57:59 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:56.994 15:57:59 -- event/cpu_locks.sh@116 -- # waitforlisten 55550 /var/tmp/spdk.sock 00:20:56.994 15:57:59 -- common/autotest_common.sh@819 -- # '[' -z 55550 ']' 00:20:56.994 15:57:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.994 15:57:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:56.994 15:57:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.994 15:57:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:56.994 15:57:59 -- common/autotest_common.sh@10 -- # set +x 00:20:56.994 [2024-07-22 15:57:59.744748] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:56.994 [2024-07-22 15:57:59.744880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55550 ] 00:20:57.252 [2024-07-22 15:57:59.889116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.252 [2024-07-22 15:57:59.951472] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:57.252 [2024-07-22 15:57:59.951670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.187 15:58:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:58.187 15:58:00 -- common/autotest_common.sh@852 -- # return 0 00:20:58.187 15:58:00 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=55566 00:20:58.187 15:58:00 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:20:58.187 15:58:00 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 55566 /var/tmp/spdk2.sock 00:20:58.187 15:58:00 -- common/autotest_common.sh@640 -- # local es=0 00:20:58.187 15:58:00 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 55566 /var/tmp/spdk2.sock 00:20:58.187 15:58:00 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:20:58.187 15:58:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:58.187 15:58:00 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:20:58.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:58.187 15:58:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:58.187 15:58:00 -- common/autotest_common.sh@643 -- # waitforlisten 55566 /var/tmp/spdk2.sock 00:20:58.187 15:58:00 -- common/autotest_common.sh@819 -- # '[' -z 55566 ']' 00:20:58.187 15:58:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:58.187 15:58:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:58.187 15:58:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:20:58.187 15:58:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:58.187 15:58:00 -- common/autotest_common.sh@10 -- # set +x 00:20:58.187 [2024-07-22 15:58:00.805371] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:58.187 [2024-07-22 15:58:00.805527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55566 ] 00:20:58.187 [2024-07-22 15:58:00.955118] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 55550 has claimed it. 00:20:58.187 [2024-07-22 15:58:00.955207] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:20:58.753 ERROR: process (pid: 55566) is no longer running 00:20:58.753 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (55566) - No such process 00:20:58.753 15:58:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:58.753 15:58:01 -- common/autotest_common.sh@852 -- # return 1 00:20:58.753 15:58:01 -- common/autotest_common.sh@643 -- # es=1 00:20:58.753 15:58:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:58.753 15:58:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:58.753 15:58:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:58.753 15:58:01 -- event/cpu_locks.sh@122 -- # locks_exist 55550 00:20:58.753 15:58:01 -- event/cpu_locks.sh@22 -- # lslocks -p 55550 00:20:58.753 15:58:01 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:20:59.335 15:58:02 -- event/cpu_locks.sh@124 -- # killprocess 55550 00:20:59.335 15:58:02 -- common/autotest_common.sh@926 -- # '[' -z 55550 ']' 00:20:59.335 15:58:02 -- common/autotest_common.sh@930 -- # kill -0 55550 00:20:59.335 15:58:02 -- common/autotest_common.sh@931 -- # uname 00:20:59.335 15:58:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:59.335 15:58:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55550 00:20:59.336 killing process with pid 55550 00:20:59.336 15:58:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:59.336 15:58:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:59.336 15:58:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55550' 00:20:59.336 15:58:02 -- common/autotest_common.sh@945 -- # kill 55550 00:20:59.336 15:58:02 -- common/autotest_common.sh@950 -- # wait 55550 00:20:59.594 00:20:59.594 real 0m2.655s 00:20:59.594 user 0m3.219s 00:20:59.594 sys 0m0.600s 00:20:59.594 15:58:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:59.594 15:58:02 -- common/autotest_common.sh@10 -- # set +x 00:20:59.594 ************************************ 00:20:59.594 END TEST locking_app_on_locked_coremask 00:20:59.594 ************************************ 00:20:59.594 15:58:02 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:20:59.594 15:58:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:59.594 15:58:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:59.594 15:58:02 -- common/autotest_common.sh@10 -- # set +x 00:20:59.594 ************************************ 00:20:59.594 START TEST locking_overlapped_coremask 00:20:59.594 ************************************ 00:20:59.594 15:58:02 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:20:59.594 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.594 15:58:02 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=55612 00:20:59.594 15:58:02 -- event/cpu_locks.sh@133 -- # waitforlisten 55612 /var/tmp/spdk.sock 00:20:59.594 15:58:02 -- common/autotest_common.sh@819 -- # '[' -z 55612 ']' 00:20:59.594 15:58:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.594 15:58:02 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:20:59.594 15:58:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:59.594 15:58:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.594 15:58:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:59.594 15:58:02 -- common/autotest_common.sh@10 -- # set +x 00:20:59.594 [2024-07-22 15:58:02.423159] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:59.594 [2024-07-22 15:58:02.423270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55612 ] 00:20:59.852 [2024-07-22 15:58:02.561845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:59.852 [2024-07-22 15:58:02.652030] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:59.852 [2024-07-22 15:58:02.652541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.852 [2024-07-22 15:58:02.652633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.852 [2024-07-22 15:58:02.652641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.785 15:58:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:00.785 15:58:03 -- common/autotest_common.sh@852 -- # return 0 00:21:00.785 15:58:03 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=55629 00:21:00.785 15:58:03 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:21:00.785 15:58:03 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 55629 /var/tmp/spdk2.sock 00:21:00.785 15:58:03 -- common/autotest_common.sh@640 -- # local es=0 00:21:00.785 15:58:03 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 55629 /var/tmp/spdk2.sock 00:21:00.785 15:58:03 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:21:00.785 15:58:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:00.785 15:58:03 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:21:00.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:21:00.785 15:58:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:00.785 15:58:03 -- common/autotest_common.sh@643 -- # waitforlisten 55629 /var/tmp/spdk2.sock 00:21:00.785 15:58:03 -- common/autotest_common.sh@819 -- # '[' -z 55629 ']' 00:21:00.785 15:58:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:21:00.785 15:58:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:00.785 15:58:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:21:00.785 15:58:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:00.785 15:58:03 -- common/autotest_common.sh@10 -- # set +x 00:21:00.785 [2024-07-22 15:58:03.383596] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:00.785 [2024-07-22 15:58:03.383675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55629 ] 00:21:00.785 [2024-07-22 15:58:03.527290] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55612 has claimed it. 00:21:00.785 [2024-07-22 15:58:03.527376] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:21:01.352 ERROR: process (pid: 55629) is no longer running 00:21:01.352 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (55629) - No such process 00:21:01.352 15:58:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:01.352 15:58:04 -- common/autotest_common.sh@852 -- # return 1 00:21:01.352 15:58:04 -- common/autotest_common.sh@643 -- # es=1 00:21:01.352 15:58:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:01.352 15:58:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:01.352 15:58:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:01.352 15:58:04 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:21:01.352 15:58:04 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:21:01.352 15:58:04 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:21:01.352 15:58:04 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:21:01.352 15:58:04 -- event/cpu_locks.sh@141 -- # killprocess 55612 00:21:01.352 15:58:04 -- common/autotest_common.sh@926 -- # '[' -z 55612 ']' 00:21:01.352 15:58:04 -- common/autotest_common.sh@930 -- # kill -0 55612 00:21:01.352 15:58:04 -- common/autotest_common.sh@931 -- # uname 00:21:01.352 15:58:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:01.352 15:58:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55612 00:21:01.352 killing process with pid 55612 00:21:01.352 15:58:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:01.352 15:58:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:01.352 15:58:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55612' 00:21:01.352 15:58:04 -- common/autotest_common.sh@945 -- # kill 55612 00:21:01.352 15:58:04 -- common/autotest_common.sh@950 -- # wait 55612 00:21:01.611 00:21:01.611 real 0m2.005s 00:21:01.611 user 0m5.542s 00:21:01.611 sys 0m0.310s 00:21:01.611 15:58:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:01.611 15:58:04 -- common/autotest_common.sh@10 -- # set +x 00:21:01.611 ************************************ 00:21:01.611 END TEST locking_overlapped_coremask 00:21:01.611 ************************************ 00:21:01.611 15:58:04 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:21:01.611 15:58:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:01.611 15:58:04 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:21:01.611 15:58:04 -- common/autotest_common.sh@10 -- # set +x 00:21:01.611 ************************************ 00:21:01.611 START TEST locking_overlapped_coremask_via_rpc 00:21:01.611 ************************************ 00:21:01.611 15:58:04 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:21:01.611 15:58:04 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=55675 00:21:01.611 15:58:04 -- event/cpu_locks.sh@149 -- # waitforlisten 55675 /var/tmp/spdk.sock 00:21:01.611 15:58:04 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:21:01.611 15:58:04 -- common/autotest_common.sh@819 -- # '[' -z 55675 ']' 00:21:01.611 15:58:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.611 15:58:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:01.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.611 15:58:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.611 15:58:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:01.611 15:58:04 -- common/autotest_common.sh@10 -- # set +x 00:21:01.869 [2024-07-22 15:58:04.475390] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:01.869 [2024-07-22 15:58:04.475480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55675 ] 00:21:01.869 [2024-07-22 15:58:04.609156] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:21:01.869 [2024-07-22 15:58:04.609642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:01.869 [2024-07-22 15:58:04.680223] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:01.869 [2024-07-22 15:58:04.680875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.869 [2024-07-22 15:58:04.680950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.869 [2024-07-22 15:58:04.680941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.803 15:58:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:02.803 15:58:05 -- common/autotest_common.sh@852 -- # return 0 00:21:02.803 15:58:05 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:21:02.803 15:58:05 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=55693 00:21:02.803 15:58:05 -- event/cpu_locks.sh@153 -- # waitforlisten 55693 /var/tmp/spdk2.sock 00:21:02.803 15:58:05 -- common/autotest_common.sh@819 -- # '[' -z 55693 ']' 00:21:02.803 15:58:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:21:02.803 15:58:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:02.803 15:58:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:21:02.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:21:02.803 15:58:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:02.803 15:58:05 -- common/autotest_common.sh@10 -- # set +x 00:21:02.803 [2024-07-22 15:58:05.590135] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:02.803 [2024-07-22 15:58:05.590237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55693 ] 00:21:03.063 [2024-07-22 15:58:05.736805] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:21:03.063 [2024-07-22 15:58:05.736863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:03.063 [2024-07-22 15:58:05.855615] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:03.063 [2024-07-22 15:58:05.855913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:03.063 [2024-07-22 15:58:05.856219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:03.063 [2024-07-22 15:58:05.856221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.998 15:58:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:03.998 15:58:06 -- common/autotest_common.sh@852 -- # return 0 00:21:03.998 15:58:06 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:21:03.998 15:58:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.998 15:58:06 -- common/autotest_common.sh@10 -- # set +x 00:21:03.998 15:58:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.998 15:58:06 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:21:03.998 15:58:06 -- common/autotest_common.sh@640 -- # local es=0 00:21:03.998 15:58:06 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:21:03.998 15:58:06 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:03.998 15:58:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:03.998 15:58:06 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:03.998 15:58:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:03.998 15:58:06 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:21:03.998 15:58:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.998 15:58:06 -- common/autotest_common.sh@10 -- # set +x 00:21:03.998 [2024-07-22 15:58:06.577653] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55675 has claimed it. 
00:21:03.998 request: 00:21:03.998 { 00:21:03.998 "method": "framework_enable_cpumask_locks", 00:21:03.998 "req_id": 1 00:21:03.998 } 00:21:03.998 Got JSON-RPC error response 00:21:03.998 response: 00:21:03.998 { 00:21:03.998 "code": -32603, 00:21:03.998 "message": "Failed to claim CPU core: 2" 00:21:03.998 } 00:21:03.998 15:58:06 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:03.998 15:58:06 -- common/autotest_common.sh@643 -- # es=1 00:21:03.998 15:58:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:03.998 15:58:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:03.998 15:58:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:03.998 15:58:06 -- event/cpu_locks.sh@158 -- # waitforlisten 55675 /var/tmp/spdk.sock 00:21:03.998 15:58:06 -- common/autotest_common.sh@819 -- # '[' -z 55675 ']' 00:21:03.998 15:58:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.998 15:58:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:03.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.998 15:58:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.998 15:58:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:03.998 15:58:06 -- common/autotest_common.sh@10 -- # set +x 00:21:04.257 15:58:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:04.257 15:58:06 -- common/autotest_common.sh@852 -- # return 0 00:21:04.257 15:58:06 -- event/cpu_locks.sh@159 -- # waitforlisten 55693 /var/tmp/spdk2.sock 00:21:04.257 15:58:06 -- common/autotest_common.sh@819 -- # '[' -z 55693 ']' 00:21:04.257 15:58:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:21:04.257 15:58:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:04.257 15:58:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:21:04.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:21:04.257 15:58:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:04.257 15:58:06 -- common/autotest_common.sh@10 -- # set +x 00:21:04.515 15:58:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:04.515 15:58:07 -- common/autotest_common.sh@852 -- # return 0 00:21:04.515 15:58:07 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:21:04.515 15:58:07 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:21:04.515 15:58:07 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:21:04.515 15:58:07 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:21:04.515 ************************************ 00:21:04.515 END TEST locking_overlapped_coremask_via_rpc 00:21:04.515 ************************************ 00:21:04.515 00:21:04.515 real 0m2.772s 00:21:04.515 user 0m1.508s 00:21:04.515 sys 0m0.178s 00:21:04.515 15:58:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:04.515 15:58:07 -- common/autotest_common.sh@10 -- # set +x 00:21:04.515 15:58:07 -- event/cpu_locks.sh@174 -- # cleanup 00:21:04.515 15:58:07 -- event/cpu_locks.sh@15 -- # [[ -z 55675 ]] 00:21:04.515 15:58:07 -- event/cpu_locks.sh@15 -- # killprocess 55675 00:21:04.515 15:58:07 -- common/autotest_common.sh@926 -- # '[' -z 55675 ']' 00:21:04.515 15:58:07 -- common/autotest_common.sh@930 -- # kill -0 55675 00:21:04.515 15:58:07 -- common/autotest_common.sh@931 -- # uname 00:21:04.515 15:58:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:04.515 15:58:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55675 00:21:04.515 killing process with pid 55675 00:21:04.515 15:58:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:04.515 15:58:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:04.515 15:58:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55675' 00:21:04.515 15:58:07 -- common/autotest_common.sh@945 -- # kill 55675 00:21:04.515 15:58:07 -- common/autotest_common.sh@950 -- # wait 55675 00:21:04.773 15:58:07 -- event/cpu_locks.sh@16 -- # [[ -z 55693 ]] 00:21:04.773 15:58:07 -- event/cpu_locks.sh@16 -- # killprocess 55693 00:21:04.774 15:58:07 -- common/autotest_common.sh@926 -- # '[' -z 55693 ']' 00:21:04.774 15:58:07 -- common/autotest_common.sh@930 -- # kill -0 55693 00:21:04.774 15:58:07 -- common/autotest_common.sh@931 -- # uname 00:21:04.774 15:58:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:04.774 15:58:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55693 00:21:04.774 killing process with pid 55693 00:21:04.774 15:58:07 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:04.774 15:58:07 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:04.774 15:58:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55693' 00:21:04.774 15:58:07 -- common/autotest_common.sh@945 -- # kill 55693 00:21:04.774 15:58:07 -- common/autotest_common.sh@950 -- # wait 55693 00:21:05.032 15:58:07 -- event/cpu_locks.sh@18 -- # rm -f 00:21:05.032 Process with pid 55675 is not found 00:21:05.032 15:58:07 -- event/cpu_locks.sh@1 -- # cleanup 00:21:05.032 15:58:07 -- event/cpu_locks.sh@15 -- # [[ -z 55675 ]] 00:21:05.032 15:58:07 -- event/cpu_locks.sh@15 -- # 
killprocess 55675 00:21:05.032 15:58:07 -- common/autotest_common.sh@926 -- # '[' -z 55675 ']' 00:21:05.032 15:58:07 -- common/autotest_common.sh@930 -- # kill -0 55675 00:21:05.032 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (55675) - No such process 00:21:05.032 15:58:07 -- common/autotest_common.sh@953 -- # echo 'Process with pid 55675 is not found' 00:21:05.032 15:58:07 -- event/cpu_locks.sh@16 -- # [[ -z 55693 ]] 00:21:05.032 Process with pid 55693 is not found 00:21:05.032 15:58:07 -- event/cpu_locks.sh@16 -- # killprocess 55693 00:21:05.032 15:58:07 -- common/autotest_common.sh@926 -- # '[' -z 55693 ']' 00:21:05.032 15:58:07 -- common/autotest_common.sh@930 -- # kill -0 55693 00:21:05.032 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (55693) - No such process 00:21:05.032 15:58:07 -- common/autotest_common.sh@953 -- # echo 'Process with pid 55693 is not found' 00:21:05.032 15:58:07 -- event/cpu_locks.sh@18 -- # rm -f 00:21:05.032 ************************************ 00:21:05.032 END TEST cpu_locks 00:21:05.032 ************************************ 00:21:05.032 00:21:05.032 real 0m20.196s 00:21:05.032 user 0m36.734s 00:21:05.032 sys 0m4.552s 00:21:05.032 15:58:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:05.032 15:58:07 -- common/autotest_common.sh@10 -- # set +x 00:21:05.300 ************************************ 00:21:05.300 END TEST event 00:21:05.300 ************************************ 00:21:05.300 00:21:05.300 real 0m46.979s 00:21:05.300 user 1m34.554s 00:21:05.300 sys 0m8.019s 00:21:05.300 15:58:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:05.300 15:58:07 -- common/autotest_common.sh@10 -- # set +x 00:21:05.300 15:58:07 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:21:05.300 15:58:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:05.300 15:58:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:05.300 15:58:07 -- common/autotest_common.sh@10 -- # set +x 00:21:05.300 ************************************ 00:21:05.300 START TEST thread 00:21:05.300 ************************************ 00:21:05.300 15:58:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:21:05.300 * Looking for test storage... 00:21:05.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:21:05.300 15:58:08 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:21:05.300 15:58:08 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:21:05.300 15:58:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:05.300 15:58:08 -- common/autotest_common.sh@10 -- # set +x 00:21:05.300 ************************************ 00:21:05.300 START TEST thread_poller_perf 00:21:05.300 ************************************ 00:21:05.300 15:58:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:21:05.300 [2024-07-22 15:58:08.056722] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:05.300 [2024-07-22 15:58:08.056857] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55809 ] 00:21:05.563 [2024-07-22 15:58:08.196640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.563 [2024-07-22 15:58:08.278788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.563 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:21:06.938 ====================================== 00:21:06.938 busy:2209026008 (cyc) 00:21:06.938 total_run_count: 288000 00:21:06.938 tsc_hz: 2200000000 (cyc) 00:21:06.938 ====================================== 00:21:06.938 poller_cost: 7670 (cyc), 3486 (nsec) 00:21:06.938 00:21:06.938 real 0m1.346s 00:21:06.938 user 0m1.187s 00:21:06.938 sys 0m0.049s 00:21:06.938 15:58:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:06.938 15:58:09 -- common/autotest_common.sh@10 -- # set +x 00:21:06.938 ************************************ 00:21:06.938 END TEST thread_poller_perf 00:21:06.938 ************************************ 00:21:06.938 15:58:09 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:21:06.938 15:58:09 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:21:06.938 15:58:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:06.938 15:58:09 -- common/autotest_common.sh@10 -- # set +x 00:21:06.938 ************************************ 00:21:06.938 START TEST thread_poller_perf 00:21:06.938 ************************************ 00:21:06.938 15:58:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:21:06.938 [2024-07-22 15:58:09.444302] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:06.938 [2024-07-22 15:58:09.444404] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55844 ] 00:21:06.938 [2024-07-22 15:58:09.574600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.938 [2024-07-22 15:58:09.645154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.938 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:21:07.878 ====================================== 00:21:07.878 busy:2203130060 (cyc) 00:21:07.878 total_run_count: 3994000 00:21:07.878 tsc_hz: 2200000000 (cyc) 00:21:07.878 ====================================== 00:21:07.878 poller_cost: 551 (cyc), 250 (nsec) 00:21:08.136 00:21:08.136 real 0m1.316s 00:21:08.136 user 0m1.164s 00:21:08.136 sys 0m0.043s 00:21:08.136 15:58:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:08.136 15:58:10 -- common/autotest_common.sh@10 -- # set +x 00:21:08.136 ************************************ 00:21:08.136 END TEST thread_poller_perf 00:21:08.136 ************************************ 00:21:08.136 15:58:10 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:21:08.136 00:21:08.136 real 0m2.813s 00:21:08.136 user 0m2.407s 00:21:08.136 sys 0m0.180s 00:21:08.136 ************************************ 00:21:08.136 END TEST thread 00:21:08.136 ************************************ 00:21:08.136 15:58:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:08.136 15:58:10 -- common/autotest_common.sh@10 -- # set +x 00:21:08.136 15:58:10 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:21:08.136 15:58:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:08.136 15:58:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:08.136 15:58:10 -- common/autotest_common.sh@10 -- # set +x 00:21:08.136 ************************************ 00:21:08.136 START TEST accel 00:21:08.136 ************************************ 00:21:08.136 15:58:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:21:08.136 * Looking for test storage... 00:21:08.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:21:08.136 15:58:10 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:21:08.136 15:58:10 -- accel/accel.sh@74 -- # get_expected_opcs 00:21:08.136 15:58:10 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:21:08.136 15:58:10 -- accel/accel.sh@59 -- # spdk_tgt_pid=55918 00:21:08.136 15:58:10 -- accel/accel.sh@60 -- # waitforlisten 55918 00:21:08.136 15:58:10 -- common/autotest_common.sh@819 -- # '[' -z 55918 ']' 00:21:08.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.136 15:58:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.136 15:58:10 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:21:08.136 15:58:10 -- accel/accel.sh@58 -- # build_accel_config 00:21:08.136 15:58:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:08.136 15:58:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.136 15:58:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:08.136 15:58:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:08.136 15:58:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:08.136 15:58:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:08.136 15:58:10 -- common/autotest_common.sh@10 -- # set +x 00:21:08.136 15:58:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:08.136 15:58:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:08.136 15:58:10 -- accel/accel.sh@41 -- # local IFS=, 00:21:08.136 15:58:10 -- accel/accel.sh@42 -- # jq -r . 00:21:08.136 [2024-07-22 15:58:10.967175] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:08.136 [2024-07-22 15:58:10.967548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55918 ] 00:21:08.394 [2024-07-22 15:58:11.106285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.394 [2024-07-22 15:58:11.189849] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:08.394 [2024-07-22 15:58:11.190372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.327 15:58:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:09.327 15:58:11 -- common/autotest_common.sh@852 -- # return 0 00:21:09.327 15:58:11 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:21:09.327 15:58:11 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:21:09.327 15:58:11 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:21:09.327 15:58:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:09.327 15:58:11 -- common/autotest_common.sh@10 -- # set +x 00:21:09.327 15:58:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:09.327 15:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:21:09.327 15:58:11 -- accel/accel.sh@64 -- # IFS== 00:21:09.327 15:58:11 -- accel/accel.sh@64 -- # read -r opc module 00:21:09.327 15:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:21:09.327 15:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:21:09.327 15:58:11 -- accel/accel.sh@64 -- # IFS== 00:21:09.327 15:58:11 -- accel/accel.sh@64 -- # read -r opc module 00:21:09.327 15:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:21:09.327 15:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:21:09.327 15:58:11 -- accel/accel.sh@64 -- # IFS== 00:21:09.327 15:58:11 -- accel/accel.sh@64 -- # read -r opc module 00:21:09.327 15:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:21:09.327 15:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:21:09.327 15:58:11 -- accel/accel.sh@64 -- # IFS== 00:21:09.327 15:58:11 -- accel/accel.sh@64 -- # read -r opc module 00:21:09.327 15:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:21:09.327 15:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:21:09.327 15:58:11 -- accel/accel.sh@64 -- # IFS== 00:21:09.328 15:58:11 -- accel/accel.sh@64 -- # read -r opc module 00:21:09.328 15:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:21:09.328 15:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:21:09.328 15:58:11 -- accel/accel.sh@64 -- # IFS== 00:21:09.328 15:58:11 -- accel/accel.sh@64 -- # read -r opc module 00:21:09.328 15:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:21:09.328 15:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:21:09.328 15:58:11 -- accel/accel.sh@64 -- # IFS== 00:21:09.328 15:58:11 -- accel/accel.sh@64 -- # read -r opc module 00:21:09.328 15:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:21:09.328 15:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:21:09.328 15:58:11 -- accel/accel.sh@64 -- # IFS== 00:21:09.328 15:58:11 -- accel/accel.sh@64 -- # read -r opc module 00:21:09.328 15:58:11 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:21:09.328 15:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:21:09.328 15:58:11 -- accel/accel.sh@64 -- # IFS== 00:21:09.328 15:58:11 -- accel/accel.sh@64 -- # read -r opc module 00:21:09.328 15:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:21:09.328 15:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:21:09.328 15:58:11 -- accel/accel.sh@64 -- # IFS== 00:21:09.328 15:58:11 -- accel/accel.sh@64 -- # read -r opc module 00:21:09.328 15:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:21:09.328 15:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:21:09.328 15:58:11 -- accel/accel.sh@64 -- # IFS== 00:21:09.328 15:58:11 -- accel/accel.sh@64 -- # read -r opc module 00:21:09.328 15:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:21:09.328 15:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:21:09.328 15:58:11 -- accel/accel.sh@64 -- # IFS== 00:21:09.328 15:58:11 -- accel/accel.sh@64 -- # read -r opc module 00:21:09.328 15:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:21:09.328 15:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:21:09.328 15:58:11 -- accel/accel.sh@64 -- # IFS== 00:21:09.328 15:58:11 -- accel/accel.sh@64 -- # read -r opc module 00:21:09.328 15:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:21:09.328 15:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:21:09.328 15:58:11 -- accel/accel.sh@64 -- # IFS== 00:21:09.328 15:58:11 -- accel/accel.sh@64 -- # read -r opc module 00:21:09.328 15:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:21:09.328 15:58:11 -- accel/accel.sh@67 -- # killprocess 55918 00:21:09.328 15:58:11 -- common/autotest_common.sh@926 -- # '[' -z 55918 ']' 00:21:09.328 15:58:11 -- common/autotest_common.sh@930 -- # kill -0 55918 00:21:09.328 15:58:11 -- common/autotest_common.sh@931 -- # uname 00:21:09.328 15:58:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:09.328 15:58:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55918 00:21:09.328 15:58:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:09.328 15:58:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:09.328 15:58:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55918' 00:21:09.328 killing process with pid 55918 00:21:09.328 15:58:12 -- common/autotest_common.sh@945 -- # kill 55918 00:21:09.328 15:58:12 -- common/autotest_common.sh@950 -- # wait 55918 00:21:09.586 15:58:12 -- accel/accel.sh@68 -- # trap - ERR 00:21:09.586 15:58:12 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:21:09.586 15:58:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:09.586 15:58:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:09.586 15:58:12 -- common/autotest_common.sh@10 -- # set +x 00:21:09.586 15:58:12 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:21:09.586 15:58:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:21:09.586 15:58:12 -- accel/accel.sh@12 -- # build_accel_config 00:21:09.586 15:58:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:09.586 15:58:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:09.586 15:58:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:09.586 15:58:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:09.586 15:58:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:21:09.586 15:58:12 -- accel/accel.sh@41 -- # local IFS=, 00:21:09.586 15:58:12 -- accel/accel.sh@42 -- # jq -r . 00:21:09.586 15:58:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:09.586 15:58:12 -- common/autotest_common.sh@10 -- # set +x 00:21:09.586 15:58:12 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:21:09.586 15:58:12 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:21:09.586 15:58:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:09.586 15:58:12 -- common/autotest_common.sh@10 -- # set +x 00:21:09.586 ************************************ 00:21:09.586 START TEST accel_missing_filename 00:21:09.586 ************************************ 00:21:09.586 15:58:12 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:21:09.586 15:58:12 -- common/autotest_common.sh@640 -- # local es=0 00:21:09.586 15:58:12 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:21:09.586 15:58:12 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:21:09.586 15:58:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:09.586 15:58:12 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:21:09.586 15:58:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:09.586 15:58:12 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:21:09.586 15:58:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:21:09.586 15:58:12 -- accel/accel.sh@12 -- # build_accel_config 00:21:09.586 15:58:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:09.586 15:58:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:09.586 15:58:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:09.586 15:58:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:09.586 15:58:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:09.586 15:58:12 -- accel/accel.sh@41 -- # local IFS=, 00:21:09.586 15:58:12 -- accel/accel.sh@42 -- # jq -r . 00:21:09.586 [2024-07-22 15:58:12.382866] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:09.586 [2024-07-22 15:58:12.383019] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55968 ] 00:21:09.844 [2024-07-22 15:58:12.520862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.844 [2024-07-22 15:58:12.582116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.844 [2024-07-22 15:58:12.614017] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:09.844 [2024-07-22 15:58:12.654953] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:21:10.101 A filename is required. 
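The trace around this point records the expected-failure path: accel_perf is launched with `-w compress` but no `-l` input file, it aborts with "A filename is required.", and the NOT wrapper plus the `es=` status handling treat that non-zero exit as a pass. A minimal bash sketch of that pattern, under the assumption that the real helper in test/common/autotest_common.sh does something equivalent (names and the exact status mapping are simplified here):

```bash
# Hedged sketch of the expected-failure check traced in this log; the real
# helper lives in test/common/autotest_common.sh and is more involved.
NOT() {
    local es=0
    "$@" || es=$?                          # run the command; capture its exit status
    (( es > 128 )) && es=$(( es - 128 ))   # fold signal-style exits into a small code
    (( es != 0 ))                          # succeed only if the command failed
}

# compress without an input file (-l) must fail, so NOT returns 0 here
# (the -c /dev/fd/62 config argument used by the harness is omitted)
NOT /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
```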
00:21:10.101 15:58:12 -- common/autotest_common.sh@643 -- # es=234 00:21:10.101 15:58:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:10.101 15:58:12 -- common/autotest_common.sh@652 -- # es=106 00:21:10.101 15:58:12 -- common/autotest_common.sh@653 -- # case "$es" in 00:21:10.101 15:58:12 -- common/autotest_common.sh@660 -- # es=1 00:21:10.101 15:58:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:10.101 00:21:10.101 real 0m0.399s 00:21:10.101 user 0m0.278s 00:21:10.101 sys 0m0.071s 00:21:10.101 15:58:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:10.101 ************************************ 00:21:10.101 END TEST accel_missing_filename 00:21:10.101 ************************************ 00:21:10.101 15:58:12 -- common/autotest_common.sh@10 -- # set +x 00:21:10.101 15:58:12 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:21:10.101 15:58:12 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:21:10.101 15:58:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:10.101 15:58:12 -- common/autotest_common.sh@10 -- # set +x 00:21:10.101 ************************************ 00:21:10.101 START TEST accel_compress_verify 00:21:10.101 ************************************ 00:21:10.101 15:58:12 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:21:10.101 15:58:12 -- common/autotest_common.sh@640 -- # local es=0 00:21:10.101 15:58:12 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:21:10.101 15:58:12 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:21:10.101 15:58:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:10.101 15:58:12 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:21:10.101 15:58:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:10.101 15:58:12 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:21:10.101 15:58:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:21:10.101 15:58:12 -- accel/accel.sh@12 -- # build_accel_config 00:21:10.101 15:58:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:10.101 15:58:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:10.101 15:58:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:10.101 15:58:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:10.101 15:58:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:10.101 15:58:12 -- accel/accel.sh@41 -- # local IFS=, 00:21:10.101 15:58:12 -- accel/accel.sh@42 -- # jq -r . 00:21:10.101 [2024-07-22 15:58:12.821833] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:10.101 [2024-07-22 15:58:12.822600] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55994 ] 00:21:10.101 [2024-07-22 15:58:12.960992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.358 [2024-07-22 15:58:13.032989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.358 [2024-07-22 15:58:13.063523] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:10.358 [2024-07-22 15:58:13.103776] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:21:10.358 00:21:10.358 Compression does not support the verify option, aborting. 00:21:10.358 15:58:13 -- common/autotest_common.sh@643 -- # es=161 00:21:10.358 15:58:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:10.358 15:58:13 -- common/autotest_common.sh@652 -- # es=33 00:21:10.358 15:58:13 -- common/autotest_common.sh@653 -- # case "$es" in 00:21:10.358 15:58:13 -- common/autotest_common.sh@660 -- # es=1 00:21:10.358 ************************************ 00:21:10.358 END TEST accel_compress_verify 00:21:10.358 ************************************ 00:21:10.358 15:58:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:10.358 00:21:10.358 real 0m0.405s 00:21:10.358 user 0m0.266s 00:21:10.358 sys 0m0.081s 00:21:10.358 15:58:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:10.358 15:58:13 -- common/autotest_common.sh@10 -- # set +x 00:21:10.615 15:58:13 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:21:10.615 15:58:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:21:10.615 15:58:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:10.615 15:58:13 -- common/autotest_common.sh@10 -- # set +x 00:21:10.615 ************************************ 00:21:10.615 START TEST accel_wrong_workload 00:21:10.615 ************************************ 00:21:10.615 15:58:13 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:21:10.615 15:58:13 -- common/autotest_common.sh@640 -- # local es=0 00:21:10.615 15:58:13 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:21:10.615 15:58:13 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:21:10.616 15:58:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:10.616 15:58:13 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:21:10.616 15:58:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:10.616 15:58:13 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:21:10.616 15:58:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:21:10.616 15:58:13 -- accel/accel.sh@12 -- # build_accel_config 00:21:10.616 15:58:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:10.616 15:58:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:10.616 15:58:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:10.616 15:58:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:10.616 15:58:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:10.616 15:58:13 -- accel/accel.sh@41 -- # local IFS=, 00:21:10.616 15:58:13 -- accel/accel.sh@42 -- # jq -r . 
00:21:10.616 Unsupported workload type: foobar 00:21:10.616 [2024-07-22 15:58:13.263024] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:21:10.616 accel_perf options: 00:21:10.616 [-h help message] 00:21:10.616 [-q queue depth per core] 00:21:10.616 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:21:10.616 [-T number of threads per core 00:21:10.616 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:21:10.616 [-t time in seconds] 00:21:10.616 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:21:10.616 [ dif_verify, , dif_generate, dif_generate_copy 00:21:10.616 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:21:10.616 [-l for compress/decompress workloads, name of uncompressed input file 00:21:10.616 [-S for crc32c workload, use this seed value (default 0) 00:21:10.616 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:21:10.616 [-f for fill workload, use this BYTE value (default 255) 00:21:10.616 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:21:10.616 [-y verify result if this switch is on] 00:21:10.616 [-a tasks to allocate per core (default: same value as -q)] 00:21:10.616 Can be used to spread operations across a wider range of memory. 00:21:10.616 15:58:13 -- common/autotest_common.sh@643 -- # es=1 00:21:10.616 15:58:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:10.616 15:58:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:10.616 15:58:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:10.616 00:21:10.616 real 0m0.032s 00:21:10.616 user 0m0.014s 00:21:10.616 sys 0m0.017s 00:21:10.616 ************************************ 00:21:10.616 END TEST accel_wrong_workload 00:21:10.616 ************************************ 00:21:10.616 15:58:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:10.616 15:58:13 -- common/autotest_common.sh@10 -- # set +x 00:21:10.616 15:58:13 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:21:10.616 15:58:13 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:21:10.616 15:58:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:10.616 15:58:13 -- common/autotest_common.sh@10 -- # set +x 00:21:10.616 ************************************ 00:21:10.616 START TEST accel_negative_buffers 00:21:10.616 ************************************ 00:21:10.616 15:58:13 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:21:10.616 15:58:13 -- common/autotest_common.sh@640 -- # local es=0 00:21:10.616 15:58:13 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:21:10.616 15:58:13 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:21:10.616 15:58:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:10.616 15:58:13 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:21:10.616 15:58:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:10.616 15:58:13 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:21:10.616 15:58:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:21:10.616 15:58:13 -- accel/accel.sh@12 -- # 
build_accel_config 00:21:10.616 15:58:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:10.616 15:58:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:10.616 15:58:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:10.616 15:58:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:10.616 15:58:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:10.616 15:58:13 -- accel/accel.sh@41 -- # local IFS=, 00:21:10.616 15:58:13 -- accel/accel.sh@42 -- # jq -r . 00:21:10.616 -x option must be non-negative. 00:21:10.616 [2024-07-22 15:58:13.334393] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:21:10.616 accel_perf options: 00:21:10.616 [-h help message] 00:21:10.616 [-q queue depth per core] 00:21:10.616 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:21:10.616 [-T number of threads per core 00:21:10.616 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:21:10.616 [-t time in seconds] 00:21:10.616 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:21:10.616 [ dif_verify, , dif_generate, dif_generate_copy 00:21:10.616 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:21:10.616 [-l for compress/decompress workloads, name of uncompressed input file 00:21:10.616 [-S for crc32c workload, use this seed value (default 0) 00:21:10.616 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:21:10.616 [-f for fill workload, use this BYTE value (default 255) 00:21:10.616 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:21:10.616 [-y verify result if this switch is on] 00:21:10.616 [-a tasks to allocate per core (default: same value as -q)] 00:21:10.616 Can be used to spread operations across a wider range of memory. 
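The usage text above lists the accel_perf switches these tests exercise. As an illustrative composition only (this exact command line is not part of the recorded run; flag meanings come from the printed help, and the values mirror the crc32c configuration summary that follows), the options combine like so:

```bash
# Binary path as used elsewhere in this log; values are an assumption chosen
# to match the "Workload Type: crc32c / CRC-32C seed: 32 / Transfer size: 4096
# bytes / Queue depth: 32 / Verify: Yes" summary printed below.
ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf

# 1-second CRC-32C run: seed 32 (-S), 4 KiB transfers (-o), queue depth 32 (-q),
# one thread per core (-T), verify results (-y)
"$ACCEL_PERF" -t 1 -w crc32c -S 32 -o 4096 -q 32 -T 1 -y

# The failure that produced the help text above: -x must be non-negative
"$ACCEL_PERF" -t 1 -w xor -y -x -1   # exits non-zero: "-x option must be non-negative."
```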
00:21:10.616 ************************************ 00:21:10.616 END TEST accel_negative_buffers 00:21:10.616 ************************************ 00:21:10.616 15:58:13 -- common/autotest_common.sh@643 -- # es=1 00:21:10.616 15:58:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:10.616 15:58:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:10.616 15:58:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:10.616 00:21:10.616 real 0m0.032s 00:21:10.616 user 0m0.018s 00:21:10.616 sys 0m0.014s 00:21:10.616 15:58:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:10.616 15:58:13 -- common/autotest_common.sh@10 -- # set +x 00:21:10.616 15:58:13 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:21:10.616 15:58:13 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:21:10.616 15:58:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:10.616 15:58:13 -- common/autotest_common.sh@10 -- # set +x 00:21:10.616 ************************************ 00:21:10.616 START TEST accel_crc32c 00:21:10.616 ************************************ 00:21:10.616 15:58:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:21:10.616 15:58:13 -- accel/accel.sh@16 -- # local accel_opc 00:21:10.616 15:58:13 -- accel/accel.sh@17 -- # local accel_module 00:21:10.616 15:58:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:21:10.616 15:58:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:21:10.616 15:58:13 -- accel/accel.sh@12 -- # build_accel_config 00:21:10.616 15:58:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:10.616 15:58:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:10.616 15:58:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:10.616 15:58:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:10.616 15:58:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:10.616 15:58:13 -- accel/accel.sh@41 -- # local IFS=, 00:21:10.616 15:58:13 -- accel/accel.sh@42 -- # jq -r . 00:21:10.616 [2024-07-22 15:58:13.410330] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:10.616 [2024-07-22 15:58:13.410451] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56051 ] 00:21:10.873 [2024-07-22 15:58:13.553632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.873 [2024-07-22 15:58:13.639467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.243 15:58:14 -- accel/accel.sh@18 -- # out=' 00:21:12.243 SPDK Configuration: 00:21:12.243 Core mask: 0x1 00:21:12.243 00:21:12.243 Accel Perf Configuration: 00:21:12.243 Workload Type: crc32c 00:21:12.243 CRC-32C seed: 32 00:21:12.243 Transfer size: 4096 bytes 00:21:12.243 Vector count 1 00:21:12.243 Module: software 00:21:12.243 Queue depth: 32 00:21:12.243 Allocate depth: 32 00:21:12.243 # threads/core: 1 00:21:12.243 Run time: 1 seconds 00:21:12.243 Verify: Yes 00:21:12.243 00:21:12.243 Running for 1 seconds... 
00:21:12.243 00:21:12.243 Core,Thread Transfers Bandwidth Failed Miscompares 00:21:12.243 ------------------------------------------------------------------------------------ 00:21:12.243 0,0 405824/s 1585 MiB/s 0 0 00:21:12.243 ==================================================================================== 00:21:12.243 Total 405824/s 1585 MiB/s 0 0' 00:21:12.243 15:58:14 -- accel/accel.sh@20 -- # IFS=: 00:21:12.243 15:58:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:21:12.243 15:58:14 -- accel/accel.sh@20 -- # read -r var val 00:21:12.243 15:58:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:21:12.243 15:58:14 -- accel/accel.sh@12 -- # build_accel_config 00:21:12.243 15:58:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:12.243 15:58:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:12.243 15:58:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:12.243 15:58:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:12.243 15:58:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:12.243 15:58:14 -- accel/accel.sh@41 -- # local IFS=, 00:21:12.243 15:58:14 -- accel/accel.sh@42 -- # jq -r . 00:21:12.243 [2024-07-22 15:58:14.838807] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:12.243 [2024-07-22 15:58:14.838947] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56072 ] 00:21:12.243 [2024-07-22 15:58:14.981875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.243 [2024-07-22 15:58:15.040087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.243 15:58:15 -- accel/accel.sh@21 -- # val= 00:21:12.243 15:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # IFS=: 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # read -r var val 00:21:12.243 15:58:15 -- accel/accel.sh@21 -- # val= 00:21:12.243 15:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # IFS=: 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # read -r var val 00:21:12.243 15:58:15 -- accel/accel.sh@21 -- # val=0x1 00:21:12.243 15:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # IFS=: 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # read -r var val 00:21:12.243 15:58:15 -- accel/accel.sh@21 -- # val= 00:21:12.243 15:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # IFS=: 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # read -r var val 00:21:12.243 15:58:15 -- accel/accel.sh@21 -- # val= 00:21:12.243 15:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # IFS=: 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # read -r var val 00:21:12.243 15:58:15 -- accel/accel.sh@21 -- # val=crc32c 00:21:12.243 15:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:21:12.243 15:58:15 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # IFS=: 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # read -r var val 00:21:12.243 15:58:15 -- accel/accel.sh@21 -- # val=32 00:21:12.243 15:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # IFS=: 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # read -r var val 00:21:12.243 15:58:15 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:21:12.243 15:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # IFS=: 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # read -r var val 00:21:12.243 15:58:15 -- accel/accel.sh@21 -- # val= 00:21:12.243 15:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # IFS=: 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # read -r var val 00:21:12.243 15:58:15 -- accel/accel.sh@21 -- # val=software 00:21:12.243 15:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:21:12.243 15:58:15 -- accel/accel.sh@23 -- # accel_module=software 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # IFS=: 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # read -r var val 00:21:12.243 15:58:15 -- accel/accel.sh@21 -- # val=32 00:21:12.243 15:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # IFS=: 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # read -r var val 00:21:12.243 15:58:15 -- accel/accel.sh@21 -- # val=32 00:21:12.243 15:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # IFS=: 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # read -r var val 00:21:12.243 15:58:15 -- accel/accel.sh@21 -- # val=1 00:21:12.243 15:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # IFS=: 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # read -r var val 00:21:12.243 15:58:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:21:12.243 15:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # IFS=: 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # read -r var val 00:21:12.243 15:58:15 -- accel/accel.sh@21 -- # val=Yes 00:21:12.243 15:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # IFS=: 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # read -r var val 00:21:12.243 15:58:15 -- accel/accel.sh@21 -- # val= 00:21:12.243 15:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # IFS=: 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # read -r var val 00:21:12.243 15:58:15 -- accel/accel.sh@21 -- # val= 00:21:12.243 15:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # IFS=: 00:21:12.243 15:58:15 -- accel/accel.sh@20 -- # read -r var val 00:21:13.618 15:58:16 -- accel/accel.sh@21 -- # val= 00:21:13.618 15:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:21:13.618 15:58:16 -- accel/accel.sh@20 -- # IFS=: 00:21:13.618 15:58:16 -- accel/accel.sh@20 -- # read -r var val 00:21:13.618 15:58:16 -- accel/accel.sh@21 -- # val= 00:21:13.618 15:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:21:13.618 15:58:16 -- accel/accel.sh@20 -- # IFS=: 00:21:13.618 15:58:16 -- accel/accel.sh@20 -- # read -r var val 00:21:13.618 15:58:16 -- accel/accel.sh@21 -- # val= 00:21:13.618 15:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:21:13.618 15:58:16 -- accel/accel.sh@20 -- # IFS=: 00:21:13.618 15:58:16 -- accel/accel.sh@20 -- # read -r var val 00:21:13.618 15:58:16 -- accel/accel.sh@21 -- # val= 00:21:13.618 ************************************ 00:21:13.618 END TEST accel_crc32c 00:21:13.618 ************************************ 00:21:13.618 15:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:21:13.618 15:58:16 -- accel/accel.sh@20 -- # IFS=: 00:21:13.618 15:58:16 -- accel/accel.sh@20 -- # read -r var val 00:21:13.618 15:58:16 -- accel/accel.sh@21 -- # val= 
00:21:13.618 15:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:21:13.618 15:58:16 -- accel/accel.sh@20 -- # IFS=: 00:21:13.618 15:58:16 -- accel/accel.sh@20 -- # read -r var val 00:21:13.618 15:58:16 -- accel/accel.sh@21 -- # val= 00:21:13.618 15:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:21:13.618 15:58:16 -- accel/accel.sh@20 -- # IFS=: 00:21:13.618 15:58:16 -- accel/accel.sh@20 -- # read -r var val 00:21:13.618 15:58:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:21:13.618 15:58:16 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:21:13.618 15:58:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:13.618 00:21:13.618 real 0m2.832s 00:21:13.618 user 0m2.457s 00:21:13.618 sys 0m0.161s 00:21:13.618 15:58:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:13.618 15:58:16 -- common/autotest_common.sh@10 -- # set +x 00:21:13.618 15:58:16 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:21:13.618 15:58:16 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:21:13.618 15:58:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:13.618 15:58:16 -- common/autotest_common.sh@10 -- # set +x 00:21:13.618 ************************************ 00:21:13.618 START TEST accel_crc32c_C2 00:21:13.618 ************************************ 00:21:13.618 15:58:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:21:13.618 15:58:16 -- accel/accel.sh@16 -- # local accel_opc 00:21:13.618 15:58:16 -- accel/accel.sh@17 -- # local accel_module 00:21:13.618 15:58:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:21:13.618 15:58:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:21:13.618 15:58:16 -- accel/accel.sh@12 -- # build_accel_config 00:21:13.618 15:58:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:13.618 15:58:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:13.618 15:58:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:13.618 15:58:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:13.618 15:58:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:13.618 15:58:16 -- accel/accel.sh@41 -- # local IFS=, 00:21:13.618 15:58:16 -- accel/accel.sh@42 -- # jq -r . 00:21:13.618 [2024-07-22 15:58:16.282265] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:13.618 [2024-07-22 15:58:16.282385] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56101 ] 00:21:13.618 [2024-07-22 15:58:16.417919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.876 [2024-07-22 15:58:16.487730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.810 15:58:17 -- accel/accel.sh@18 -- # out=' 00:21:14.810 SPDK Configuration: 00:21:14.810 Core mask: 0x1 00:21:14.811 00:21:14.811 Accel Perf Configuration: 00:21:14.811 Workload Type: crc32c 00:21:14.811 CRC-32C seed: 0 00:21:14.811 Transfer size: 4096 bytes 00:21:14.811 Vector count 2 00:21:14.811 Module: software 00:21:14.811 Queue depth: 32 00:21:14.811 Allocate depth: 32 00:21:14.811 # threads/core: 1 00:21:14.811 Run time: 1 seconds 00:21:14.811 Verify: Yes 00:21:14.811 00:21:14.811 Running for 1 seconds... 
00:21:14.811 00:21:14.811 Core,Thread Transfers Bandwidth Failed Miscompares 00:21:14.811 ------------------------------------------------------------------------------------ 00:21:14.811 0,0 316992/s 2476 MiB/s 0 0 00:21:14.811 ==================================================================================== 00:21:14.811 Total 316992/s 1238 MiB/s 0 0' 00:21:14.811 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:21:14.811 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:21:14.811 15:58:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:21:14.811 15:58:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:21:14.811 15:58:17 -- accel/accel.sh@12 -- # build_accel_config 00:21:14.811 15:58:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:14.811 15:58:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:14.811 15:58:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:14.811 15:58:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:14.811 15:58:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:14.811 15:58:17 -- accel/accel.sh@41 -- # local IFS=, 00:21:14.811 15:58:17 -- accel/accel.sh@42 -- # jq -r . 00:21:15.069 [2024-07-22 15:58:17.692380] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:15.069 [2024-07-22 15:58:17.692530] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56126 ] 00:21:15.069 [2024-07-22 15:58:17.833725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.069 [2024-07-22 15:58:17.891656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.069 15:58:17 -- accel/accel.sh@21 -- # val= 00:21:15.069 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:21:15.069 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:21:15.069 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:21:15.069 15:58:17 -- accel/accel.sh@21 -- # val= 00:21:15.069 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:21:15.069 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:21:15.069 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:21:15.070 15:58:17 -- accel/accel.sh@21 -- # val=0x1 00:21:15.070 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:21:15.070 15:58:17 -- accel/accel.sh@21 -- # val= 00:21:15.070 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:21:15.070 15:58:17 -- accel/accel.sh@21 -- # val= 00:21:15.070 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:21:15.070 15:58:17 -- accel/accel.sh@21 -- # val=crc32c 00:21:15.070 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:21:15.070 15:58:17 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:21:15.070 15:58:17 -- accel/accel.sh@21 -- # val=0 00:21:15.070 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:21:15.070 15:58:17 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:21:15.070 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:21:15.070 15:58:17 -- accel/accel.sh@21 -- # val= 00:21:15.070 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:21:15.070 15:58:17 -- accel/accel.sh@21 -- # val=software 00:21:15.070 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:21:15.070 15:58:17 -- accel/accel.sh@23 -- # accel_module=software 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:21:15.070 15:58:17 -- accel/accel.sh@21 -- # val=32 00:21:15.070 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:21:15.070 15:58:17 -- accel/accel.sh@21 -- # val=32 00:21:15.070 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:21:15.070 15:58:17 -- accel/accel.sh@21 -- # val=1 00:21:15.070 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:21:15.070 15:58:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:21:15.070 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:21:15.070 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:21:15.328 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:21:15.328 15:58:17 -- accel/accel.sh@21 -- # val=Yes 00:21:15.328 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:21:15.328 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:21:15.328 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:21:15.328 15:58:17 -- accel/accel.sh@21 -- # val= 00:21:15.328 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:21:15.328 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:21:15.328 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:21:15.328 15:58:17 -- accel/accel.sh@21 -- # val= 00:21:15.328 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:21:15.328 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:21:15.328 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:21:16.263 15:58:19 -- accel/accel.sh@21 -- # val= 00:21:16.263 15:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:21:16.263 15:58:19 -- accel/accel.sh@20 -- # IFS=: 00:21:16.263 15:58:19 -- accel/accel.sh@20 -- # read -r var val 00:21:16.263 15:58:19 -- accel/accel.sh@21 -- # val= 00:21:16.263 15:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:21:16.263 15:58:19 -- accel/accel.sh@20 -- # IFS=: 00:21:16.263 15:58:19 -- accel/accel.sh@20 -- # read -r var val 00:21:16.263 15:58:19 -- accel/accel.sh@21 -- # val= 00:21:16.263 15:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:21:16.263 15:58:19 -- accel/accel.sh@20 -- # IFS=: 00:21:16.263 15:58:19 -- accel/accel.sh@20 -- # read -r var val 00:21:16.263 15:58:19 -- accel/accel.sh@21 -- # val= 00:21:16.263 15:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:21:16.263 15:58:19 -- accel/accel.sh@20 -- # IFS=: 00:21:16.263 15:58:19 -- accel/accel.sh@20 -- # read -r var val 00:21:16.263 15:58:19 -- accel/accel.sh@21 -- # val= 00:21:16.263 15:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:21:16.263 15:58:19 -- accel/accel.sh@20 -- # IFS=: 00:21:16.263 15:58:19 -- 
accel/accel.sh@20 -- # read -r var val 00:21:16.263 ************************************ 00:21:16.263 END TEST accel_crc32c_C2 00:21:16.263 ************************************ 00:21:16.263 15:58:19 -- accel/accel.sh@21 -- # val= 00:21:16.263 15:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:21:16.263 15:58:19 -- accel/accel.sh@20 -- # IFS=: 00:21:16.263 15:58:19 -- accel/accel.sh@20 -- # read -r var val 00:21:16.263 15:58:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:21:16.263 15:58:19 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:21:16.263 15:58:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:16.263 00:21:16.263 real 0m2.816s 00:21:16.263 user 0m2.456s 00:21:16.263 sys 0m0.151s 00:21:16.263 15:58:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:16.263 15:58:19 -- common/autotest_common.sh@10 -- # set +x 00:21:16.263 15:58:19 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:21:16.263 15:58:19 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:21:16.263 15:58:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:16.263 15:58:19 -- common/autotest_common.sh@10 -- # set +x 00:21:16.263 ************************************ 00:21:16.263 START TEST accel_copy 00:21:16.263 ************************************ 00:21:16.263 15:58:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:21:16.263 15:58:19 -- accel/accel.sh@16 -- # local accel_opc 00:21:16.263 15:58:19 -- accel/accel.sh@17 -- # local accel_module 00:21:16.263 15:58:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:21:16.263 15:58:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:21:16.263 15:58:19 -- accel/accel.sh@12 -- # build_accel_config 00:21:16.263 15:58:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:16.263 15:58:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:16.263 15:58:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:16.263 15:58:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:16.263 15:58:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:16.263 15:58:19 -- accel/accel.sh@41 -- # local IFS=, 00:21:16.263 15:58:19 -- accel/accel.sh@42 -- # jq -r . 00:21:16.521 [2024-07-22 15:58:19.141045] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:16.521 [2024-07-22 15:58:19.141139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56155 ] 00:21:16.521 [2024-07-22 15:58:19.275955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.521 [2024-07-22 15:58:19.336175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.896 15:58:20 -- accel/accel.sh@18 -- # out=' 00:21:17.896 SPDK Configuration: 00:21:17.896 Core mask: 0x1 00:21:17.896 00:21:17.896 Accel Perf Configuration: 00:21:17.896 Workload Type: copy 00:21:17.896 Transfer size: 4096 bytes 00:21:17.896 Vector count 1 00:21:17.896 Module: software 00:21:17.896 Queue depth: 32 00:21:17.896 Allocate depth: 32 00:21:17.896 # threads/core: 1 00:21:17.896 Run time: 1 seconds 00:21:17.896 Verify: Yes 00:21:17.896 00:21:17.896 Running for 1 seconds... 
00:21:17.896 00:21:17.896 Core,Thread Transfers Bandwidth Failed Miscompares 00:21:17.896 ------------------------------------------------------------------------------------ 00:21:17.896 0,0 290592/s 1135 MiB/s 0 0 00:21:17.896 ==================================================================================== 00:21:17.896 Total 290592/s 1135 MiB/s 0 0' 00:21:17.896 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:21:17.896 15:58:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:21:17.896 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:21:17.896 15:58:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:21:17.896 15:58:20 -- accel/accel.sh@12 -- # build_accel_config 00:21:17.896 15:58:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:17.896 15:58:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:17.896 15:58:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:17.896 15:58:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:17.896 15:58:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:17.896 15:58:20 -- accel/accel.sh@41 -- # local IFS=, 00:21:17.896 15:58:20 -- accel/accel.sh@42 -- # jq -r . 00:21:17.896 [2024-07-22 15:58:20.535176] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:17.896 [2024-07-22 15:58:20.535308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56169 ] 00:21:17.896 [2024-07-22 15:58:20.675436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.896 [2024-07-22 15:58:20.741515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.154 15:58:20 -- accel/accel.sh@21 -- # val= 00:21:18.154 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:21:18.154 15:58:20 -- accel/accel.sh@21 -- # val= 00:21:18.154 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:21:18.154 15:58:20 -- accel/accel.sh@21 -- # val=0x1 00:21:18.154 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:21:18.154 15:58:20 -- accel/accel.sh@21 -- # val= 00:21:18.154 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:21:18.154 15:58:20 -- accel/accel.sh@21 -- # val= 00:21:18.154 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:21:18.154 15:58:20 -- accel/accel.sh@21 -- # val=copy 00:21:18.154 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:21:18.154 15:58:20 -- accel/accel.sh@24 -- # accel_opc=copy 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:21:18.154 15:58:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:21:18.154 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:21:18.154 15:58:20 -- 
accel/accel.sh@21 -- # val= 00:21:18.154 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:21:18.154 15:58:20 -- accel/accel.sh@21 -- # val=software 00:21:18.154 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:21:18.154 15:58:20 -- accel/accel.sh@23 -- # accel_module=software 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:21:18.154 15:58:20 -- accel/accel.sh@21 -- # val=32 00:21:18.154 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:21:18.154 15:58:20 -- accel/accel.sh@21 -- # val=32 00:21:18.154 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:21:18.154 15:58:20 -- accel/accel.sh@21 -- # val=1 00:21:18.154 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:21:18.154 15:58:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:21:18.154 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:21:18.154 15:58:20 -- accel/accel.sh@21 -- # val=Yes 00:21:18.154 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:21:18.154 15:58:20 -- accel/accel.sh@21 -- # val= 00:21:18.154 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:21:18.154 15:58:20 -- accel/accel.sh@21 -- # val= 00:21:18.154 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:21:18.154 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:21:19.088 15:58:21 -- accel/accel.sh@21 -- # val= 00:21:19.088 15:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:21:19.088 15:58:21 -- accel/accel.sh@20 -- # IFS=: 00:21:19.088 15:58:21 -- accel/accel.sh@20 -- # read -r var val 00:21:19.088 15:58:21 -- accel/accel.sh@21 -- # val= 00:21:19.088 15:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:21:19.088 15:58:21 -- accel/accel.sh@20 -- # IFS=: 00:21:19.088 15:58:21 -- accel/accel.sh@20 -- # read -r var val 00:21:19.088 15:58:21 -- accel/accel.sh@21 -- # val= 00:21:19.088 15:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:21:19.088 15:58:21 -- accel/accel.sh@20 -- # IFS=: 00:21:19.088 15:58:21 -- accel/accel.sh@20 -- # read -r var val 00:21:19.088 15:58:21 -- accel/accel.sh@21 -- # val= 00:21:19.088 15:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:21:19.088 15:58:21 -- accel/accel.sh@20 -- # IFS=: 00:21:19.088 15:58:21 -- accel/accel.sh@20 -- # read -r var val 00:21:19.088 15:58:21 -- accel/accel.sh@21 -- # val= 00:21:19.088 15:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:21:19.088 15:58:21 -- accel/accel.sh@20 -- # IFS=: 00:21:19.088 15:58:21 -- accel/accel.sh@20 -- # read -r var val 00:21:19.088 15:58:21 -- accel/accel.sh@21 -- # val= 00:21:19.088 ************************************ 00:21:19.088 END TEST accel_copy 00:21:19.088 ************************************ 00:21:19.088 
15:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:21:19.088 15:58:21 -- accel/accel.sh@20 -- # IFS=: 00:21:19.088 15:58:21 -- accel/accel.sh@20 -- # read -r var val 00:21:19.088 15:58:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:21:19.088 15:58:21 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:21:19.088 15:58:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:19.088 00:21:19.088 real 0m2.794s 00:21:19.088 user 0m2.434s 00:21:19.088 sys 0m0.154s 00:21:19.088 15:58:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:19.088 15:58:21 -- common/autotest_common.sh@10 -- # set +x 00:21:19.088 15:58:21 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:21:19.088 15:58:21 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:21:19.088 15:58:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:19.088 15:58:21 -- common/autotest_common.sh@10 -- # set +x 00:21:19.088 ************************************ 00:21:19.088 START TEST accel_fill 00:21:19.088 ************************************ 00:21:19.088 15:58:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:21:19.088 15:58:21 -- accel/accel.sh@16 -- # local accel_opc 00:21:19.088 15:58:21 -- accel/accel.sh@17 -- # local accel_module 00:21:19.347 15:58:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:21:19.347 15:58:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:21:19.347 15:58:21 -- accel/accel.sh@12 -- # build_accel_config 00:21:19.347 15:58:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:19.347 15:58:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:19.347 15:58:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:19.347 15:58:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:19.347 15:58:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:19.347 15:58:21 -- accel/accel.sh@41 -- # local IFS=, 00:21:19.347 15:58:21 -- accel/accel.sh@42 -- # jq -r . 00:21:19.347 [2024-07-22 15:58:21.974594] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:19.347 [2024-07-22 15:58:21.974730] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56209 ] 00:21:19.347 [2024-07-22 15:58:22.113794] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.347 [2024-07-22 15:58:22.200481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.753 15:58:23 -- accel/accel.sh@18 -- # out=' 00:21:20.753 SPDK Configuration: 00:21:20.753 Core mask: 0x1 00:21:20.753 00:21:20.753 Accel Perf Configuration: 00:21:20.753 Workload Type: fill 00:21:20.753 Fill pattern: 0x80 00:21:20.753 Transfer size: 4096 bytes 00:21:20.753 Vector count 1 00:21:20.753 Module: software 00:21:20.754 Queue depth: 64 00:21:20.754 Allocate depth: 64 00:21:20.754 # threads/core: 1 00:21:20.754 Run time: 1 seconds 00:21:20.754 Verify: Yes 00:21:20.754 00:21:20.754 Running for 1 seconds... 
00:21:20.754 00:21:20.754 Core,Thread Transfers Bandwidth Failed Miscompares 00:21:20.754 ------------------------------------------------------------------------------------ 00:21:20.754 0,0 433984/s 1695 MiB/s 0 0 00:21:20.754 ==================================================================================== 00:21:20.754 Total 433984/s 1695 MiB/s 0 0' 00:21:20.754 15:58:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:21:20.754 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:21:20.754 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:21:20.754 15:58:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:21:20.754 15:58:23 -- accel/accel.sh@12 -- # build_accel_config 00:21:20.754 15:58:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:20.754 15:58:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:20.754 15:58:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:20.754 15:58:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:20.754 15:58:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:20.754 15:58:23 -- accel/accel.sh@41 -- # local IFS=, 00:21:20.754 15:58:23 -- accel/accel.sh@42 -- # jq -r . 00:21:20.754 [2024-07-22 15:58:23.398402] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:20.754 [2024-07-22 15:58:23.398520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56223 ] 00:21:20.754 [2024-07-22 15:58:23.531597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.754 [2024-07-22 15:58:23.592035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.012 15:58:23 -- accel/accel.sh@21 -- # val= 00:21:21.012 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:21:21.012 15:58:23 -- accel/accel.sh@21 -- # val= 00:21:21.012 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:21:21.012 15:58:23 -- accel/accel.sh@21 -- # val=0x1 00:21:21.012 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:21:21.012 15:58:23 -- accel/accel.sh@21 -- # val= 00:21:21.012 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:21:21.012 15:58:23 -- accel/accel.sh@21 -- # val= 00:21:21.012 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:21:21.012 15:58:23 -- accel/accel.sh@21 -- # val=fill 00:21:21.012 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.012 15:58:23 -- accel/accel.sh@24 -- # accel_opc=fill 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:21:21.012 15:58:23 -- accel/accel.sh@21 -- # val=0x80 00:21:21.012 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # read -r var val 
00:21:21.012 15:58:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:21:21.012 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:21:21.012 15:58:23 -- accel/accel.sh@21 -- # val= 00:21:21.012 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:21:21.012 15:58:23 -- accel/accel.sh@21 -- # val=software 00:21:21.012 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.012 15:58:23 -- accel/accel.sh@23 -- # accel_module=software 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:21:21.012 15:58:23 -- accel/accel.sh@21 -- # val=64 00:21:21.012 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:21:21.012 15:58:23 -- accel/accel.sh@21 -- # val=64 00:21:21.012 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:21:21.012 15:58:23 -- accel/accel.sh@21 -- # val=1 00:21:21.012 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:21:21.012 15:58:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:21:21.012 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:21:21.012 15:58:23 -- accel/accel.sh@21 -- # val=Yes 00:21:21.012 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:21:21.012 15:58:23 -- accel/accel.sh@21 -- # val= 00:21:21.012 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:21:21.012 15:58:23 -- accel/accel.sh@21 -- # val= 00:21:21.012 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:21:21.012 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:21:21.946 15:58:24 -- accel/accel.sh@21 -- # val= 00:21:21.946 15:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.946 15:58:24 -- accel/accel.sh@20 -- # IFS=: 00:21:21.946 15:58:24 -- accel/accel.sh@20 -- # read -r var val 00:21:21.946 15:58:24 -- accel/accel.sh@21 -- # val= 00:21:21.946 15:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.946 15:58:24 -- accel/accel.sh@20 -- # IFS=: 00:21:21.946 15:58:24 -- accel/accel.sh@20 -- # read -r var val 00:21:21.946 15:58:24 -- accel/accel.sh@21 -- # val= 00:21:21.946 15:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.946 15:58:24 -- accel/accel.sh@20 -- # IFS=: 00:21:21.946 15:58:24 -- accel/accel.sh@20 -- # read -r var val 00:21:21.946 15:58:24 -- accel/accel.sh@21 -- # val= 00:21:21.946 15:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.946 15:58:24 -- accel/accel.sh@20 -- # IFS=: 00:21:21.946 15:58:24 -- accel/accel.sh@20 -- # read -r var val 00:21:21.946 15:58:24 -- accel/accel.sh@21 -- # val= 00:21:21.946 15:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.946 15:58:24 -- accel/accel.sh@20 -- # IFS=: 
00:21:21.946 15:58:24 -- accel/accel.sh@20 -- # read -r var val 00:21:21.946 15:58:24 -- accel/accel.sh@21 -- # val= 00:21:21.946 15:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:21:21.946 15:58:24 -- accel/accel.sh@20 -- # IFS=: 00:21:21.946 15:58:24 -- accel/accel.sh@20 -- # read -r var val 00:21:21.946 15:58:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:21:21.946 15:58:24 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:21:21.946 15:58:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:21.946 00:21:21.946 real 0m2.816s 00:21:21.946 user 0m2.454s 00:21:21.946 sys 0m0.155s 00:21:21.946 15:58:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:21.946 15:58:24 -- common/autotest_common.sh@10 -- # set +x 00:21:21.946 ************************************ 00:21:21.946 END TEST accel_fill 00:21:21.946 ************************************ 00:21:21.946 15:58:24 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:21:21.946 15:58:24 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:21:21.946 15:58:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:21.946 15:58:24 -- common/autotest_common.sh@10 -- # set +x 00:21:21.946 ************************************ 00:21:21.946 START TEST accel_copy_crc32c 00:21:21.946 ************************************ 00:21:21.946 15:58:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:21:21.946 15:58:24 -- accel/accel.sh@16 -- # local accel_opc 00:21:21.946 15:58:24 -- accel/accel.sh@17 -- # local accel_module 00:21:21.946 15:58:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:21:21.946 15:58:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:21:21.946 15:58:24 -- accel/accel.sh@12 -- # build_accel_config 00:21:22.204 15:58:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:22.204 15:58:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:22.204 15:58:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:22.204 15:58:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:22.204 15:58:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:22.204 15:58:24 -- accel/accel.sh@41 -- # local IFS=, 00:21:22.204 15:58:24 -- accel/accel.sh@42 -- # jq -r . 00:21:22.204 [2024-07-22 15:58:24.831134] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:22.204 [2024-07-22 15:58:24.831263] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56258 ] 00:21:22.204 [2024-07-22 15:58:24.968372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.204 [2024-07-22 15:58:25.027829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.579 15:58:26 -- accel/accel.sh@18 -- # out=' 00:21:23.579 SPDK Configuration: 00:21:23.579 Core mask: 0x1 00:21:23.579 00:21:23.579 Accel Perf Configuration: 00:21:23.579 Workload Type: copy_crc32c 00:21:23.579 CRC-32C seed: 0 00:21:23.579 Vector size: 4096 bytes 00:21:23.579 Transfer size: 4096 bytes 00:21:23.579 Vector count 1 00:21:23.579 Module: software 00:21:23.579 Queue depth: 32 00:21:23.579 Allocate depth: 32 00:21:23.579 # threads/core: 1 00:21:23.579 Run time: 1 seconds 00:21:23.579 Verify: Yes 00:21:23.579 00:21:23.579 Running for 1 seconds... 
00:21:23.579 00:21:23.579 Core,Thread Transfers Bandwidth Failed Miscompares 00:21:23.579 ------------------------------------------------------------------------------------ 00:21:23.579 0,0 235328/s 919 MiB/s 0 0 00:21:23.579 ==================================================================================== 00:21:23.579 Total 235328/s 919 MiB/s 0 0' 00:21:23.579 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:21:23.579 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:21:23.579 15:58:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:21:23.579 15:58:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:21:23.579 15:58:26 -- accel/accel.sh@12 -- # build_accel_config 00:21:23.579 15:58:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:23.579 15:58:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:23.579 15:58:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:23.579 15:58:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:23.579 15:58:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:23.579 15:58:26 -- accel/accel.sh@41 -- # local IFS=, 00:21:23.579 15:58:26 -- accel/accel.sh@42 -- # jq -r . 00:21:23.579 [2024-07-22 15:58:26.234761] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:23.579 [2024-07-22 15:58:26.234881] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56277 ] 00:21:23.579 [2024-07-22 15:58:26.377723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.579 [2024-07-22 15:58:26.435398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.838 15:58:26 -- accel/accel.sh@21 -- # val= 00:21:23.838 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:21:23.838 15:58:26 -- accel/accel.sh@21 -- # val= 00:21:23.838 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:21:23.838 15:58:26 -- accel/accel.sh@21 -- # val=0x1 00:21:23.838 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:21:23.838 15:58:26 -- accel/accel.sh@21 -- # val= 00:21:23.838 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:21:23.838 15:58:26 -- accel/accel.sh@21 -- # val= 00:21:23.838 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:21:23.838 15:58:26 -- accel/accel.sh@21 -- # val=copy_crc32c 00:21:23.838 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:21:23.838 15:58:26 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:21:23.838 15:58:26 -- accel/accel.sh@21 -- # val=0 00:21:23.838 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:21:23.838 
15:58:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:21:23.838 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:21:23.838 15:58:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:21:23.838 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:21:23.838 15:58:26 -- accel/accel.sh@21 -- # val= 00:21:23.838 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:21:23.838 15:58:26 -- accel/accel.sh@21 -- # val=software 00:21:23.838 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:21:23.838 15:58:26 -- accel/accel.sh@23 -- # accel_module=software 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:21:23.838 15:58:26 -- accel/accel.sh@21 -- # val=32 00:21:23.838 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:21:23.838 15:58:26 -- accel/accel.sh@21 -- # val=32 00:21:23.838 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:21:23.838 15:58:26 -- accel/accel.sh@21 -- # val=1 00:21:23.838 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:21:23.838 15:58:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:21:23.838 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:21:23.838 15:58:26 -- accel/accel.sh@21 -- # val=Yes 00:21:23.838 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:21:23.838 15:58:26 -- accel/accel.sh@21 -- # val= 00:21:23.838 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:21:23.838 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:21:23.838 15:58:26 -- accel/accel.sh@21 -- # val= 00:21:23.838 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:21:23.839 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:21:23.839 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:21:24.772 15:58:27 -- accel/accel.sh@21 -- # val= 00:21:24.772 15:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:21:24.772 15:58:27 -- accel/accel.sh@20 -- # IFS=: 00:21:24.772 15:58:27 -- accel/accel.sh@20 -- # read -r var val 00:21:24.772 15:58:27 -- accel/accel.sh@21 -- # val= 00:21:24.772 15:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:21:24.772 15:58:27 -- accel/accel.sh@20 -- # IFS=: 00:21:24.772 15:58:27 -- accel/accel.sh@20 -- # read -r var val 00:21:24.772 15:58:27 -- accel/accel.sh@21 -- # val= 00:21:24.772 15:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:21:24.772 15:58:27 -- accel/accel.sh@20 -- # IFS=: 00:21:24.772 15:58:27 -- accel/accel.sh@20 -- # read -r var val 00:21:24.772 15:58:27 -- accel/accel.sh@21 -- # val= 00:21:24.772 15:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:21:24.772 15:58:27 -- accel/accel.sh@20 -- # IFS=: 
00:21:24.772 15:58:27 -- accel/accel.sh@20 -- # read -r var val 00:21:24.772 15:58:27 -- accel/accel.sh@21 -- # val= 00:21:24.773 15:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:21:24.773 15:58:27 -- accel/accel.sh@20 -- # IFS=: 00:21:24.773 15:58:27 -- accel/accel.sh@20 -- # read -r var val 00:21:24.773 15:58:27 -- accel/accel.sh@21 -- # val= 00:21:24.773 ************************************ 00:21:24.773 END TEST accel_copy_crc32c 00:21:24.773 ************************************ 00:21:24.773 15:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:21:24.773 15:58:27 -- accel/accel.sh@20 -- # IFS=: 00:21:24.773 15:58:27 -- accel/accel.sh@20 -- # read -r var val 00:21:24.773 15:58:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:21:24.773 15:58:27 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:21:24.773 15:58:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:24.773 00:21:24.773 real 0m2.802s 00:21:24.773 user 0m2.448s 00:21:24.773 sys 0m0.146s 00:21:24.773 15:58:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:24.773 15:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:25.031 15:58:27 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:21:25.031 15:58:27 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:21:25.031 15:58:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:25.031 15:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:25.031 ************************************ 00:21:25.031 START TEST accel_copy_crc32c_C2 00:21:25.031 ************************************ 00:21:25.031 15:58:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:21:25.031 15:58:27 -- accel/accel.sh@16 -- # local accel_opc 00:21:25.031 15:58:27 -- accel/accel.sh@17 -- # local accel_module 00:21:25.031 15:58:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:21:25.031 15:58:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:21:25.031 15:58:27 -- accel/accel.sh@12 -- # build_accel_config 00:21:25.031 15:58:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:25.031 15:58:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:25.031 15:58:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:25.031 15:58:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:25.031 15:58:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:25.031 15:58:27 -- accel/accel.sh@41 -- # local IFS=, 00:21:25.032 15:58:27 -- accel/accel.sh@42 -- # jq -r . 00:21:25.032 [2024-07-22 15:58:27.676749] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
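The blocks of "-- accel/accel.sh@21 -- # val=..." lines running through this section are bash xtrace of the accel/accel.sh harness walking the accel_perf summary: it splits each summary line on ":" and records which opcode and module actually ran before asserting on them. A rough reconstruction of that loop, pieced together from the trace alone (not the verbatim accel/accel.sh source), would look roughly like:

  # hypothetical sketch of the check loop seen at accel/accel.sh@20-24 in the xtrace
  while IFS=: read -r var val; do
      case "$var" in
          *"Workload Type"*) accel_opc=${val##* } ;;     # e.g. copy_crc32c
          *Module*)          accel_module=${val##* } ;;  # e.g. software
      esac
  done <<< "$out"
  [[ -n $accel_module ]] && [[ -n $accel_opc ]]
  [[ $accel_module == software ]]                        # the '\s\o\f\t\w\a\r\e' compare in the trace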
00:21:25.032 [2024-07-22 15:58:27.676893] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56306 ] 00:21:25.032 [2024-07-22 15:58:27.821086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.032 [2024-07-22 15:58:27.890215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.407 15:58:29 -- accel/accel.sh@18 -- # out=' 00:21:26.407 SPDK Configuration: 00:21:26.407 Core mask: 0x1 00:21:26.407 00:21:26.407 Accel Perf Configuration: 00:21:26.407 Workload Type: copy_crc32c 00:21:26.407 CRC-32C seed: 0 00:21:26.407 Vector size: 4096 bytes 00:21:26.407 Transfer size: 8192 bytes 00:21:26.407 Vector count 2 00:21:26.407 Module: software 00:21:26.407 Queue depth: 32 00:21:26.407 Allocate depth: 32 00:21:26.407 # threads/core: 1 00:21:26.407 Run time: 1 seconds 00:21:26.407 Verify: Yes 00:21:26.407 00:21:26.407 Running for 1 seconds... 00:21:26.407 00:21:26.407 Core,Thread Transfers Bandwidth Failed Miscompares 00:21:26.407 ------------------------------------------------------------------------------------ 00:21:26.407 0,0 170560/s 1332 MiB/s 0 0 00:21:26.407 ==================================================================================== 00:21:26.407 Total 170560/s 1332 MiB/s 0 0' 00:21:26.407 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:21:26.407 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:21:26.407 15:58:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:21:26.407 15:58:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:21:26.407 15:58:29 -- accel/accel.sh@12 -- # build_accel_config 00:21:26.407 15:58:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:26.407 15:58:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:26.407 15:58:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:26.407 15:58:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:26.407 15:58:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:26.407 15:58:29 -- accel/accel.sh@41 -- # local IFS=, 00:21:26.407 15:58:29 -- accel/accel.sh@42 -- # jq -r . 00:21:26.407 [2024-07-22 15:58:29.081346] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
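The Bandwidth column in these summaries is simply transfers per second multiplied by the transfer size: this copy_crc32c -C 2 run moves 8192 bytes per operation, so 170560/s works out to about 1332 MiB/s in both the per-core row and the total, and the earlier single-buffer run (4096-byte transfers at 235328/s) to about 919 MiB/s. Plain shell arithmetic, nothing SPDK-specific, confirms it:

  $ echo $((170560 * 8192 / 1024 / 1024))   # -C 2 run, 8 KiB per transfer
  1332
  $ echo $((235328 * 4096 / 1024 / 1024))   # single-buffer run, 4 KiB per transfer
  919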
00:21:26.407 [2024-07-22 15:58:29.081474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56326 ] 00:21:26.407 [2024-07-22 15:58:29.220898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.665 [2024-07-22 15:58:29.280901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.665 15:58:29 -- accel/accel.sh@21 -- # val= 00:21:26.665 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:21:26.665 15:58:29 -- accel/accel.sh@21 -- # val= 00:21:26.665 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:21:26.665 15:58:29 -- accel/accel.sh@21 -- # val=0x1 00:21:26.665 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:21:26.665 15:58:29 -- accel/accel.sh@21 -- # val= 00:21:26.665 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:21:26.665 15:58:29 -- accel/accel.sh@21 -- # val= 00:21:26.665 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:21:26.665 15:58:29 -- accel/accel.sh@21 -- # val=copy_crc32c 00:21:26.665 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:21:26.665 15:58:29 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:21:26.665 15:58:29 -- accel/accel.sh@21 -- # val=0 00:21:26.665 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:21:26.665 15:58:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:21:26.665 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:21:26.665 15:58:29 -- accel/accel.sh@21 -- # val='8192 bytes' 00:21:26.665 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:21:26.665 15:58:29 -- accel/accel.sh@21 -- # val= 00:21:26.665 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:21:26.665 15:58:29 -- accel/accel.sh@21 -- # val=software 00:21:26.665 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:21:26.665 15:58:29 -- accel/accel.sh@23 -- # accel_module=software 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:21:26.665 15:58:29 -- accel/accel.sh@21 -- # val=32 00:21:26.665 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:21:26.665 15:58:29 -- accel/accel.sh@21 -- # val=32 
00:21:26.665 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:21:26.665 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:21:26.665 15:58:29 -- accel/accel.sh@21 -- # val=1 00:21:26.666 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:21:26.666 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:21:26.666 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:21:26.666 15:58:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:21:26.666 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:21:26.666 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:21:26.666 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:21:26.666 15:58:29 -- accel/accel.sh@21 -- # val=Yes 00:21:26.666 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:21:26.666 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:21:26.666 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:21:26.666 15:58:29 -- accel/accel.sh@21 -- # val= 00:21:26.666 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:21:26.666 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:21:26.666 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:21:26.666 15:58:29 -- accel/accel.sh@21 -- # val= 00:21:26.666 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:21:26.666 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:21:26.666 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:21:27.664 15:58:30 -- accel/accel.sh@21 -- # val= 00:21:27.664 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:21:27.664 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:21:27.664 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:21:27.664 15:58:30 -- accel/accel.sh@21 -- # val= 00:21:27.664 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:21:27.664 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:21:27.664 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:21:27.664 15:58:30 -- accel/accel.sh@21 -- # val= 00:21:27.664 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:21:27.664 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:21:27.665 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:21:27.665 15:58:30 -- accel/accel.sh@21 -- # val= 00:21:27.665 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:21:27.665 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:21:27.665 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:21:27.665 15:58:30 -- accel/accel.sh@21 -- # val= 00:21:27.665 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:21:27.665 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:21:27.665 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:21:27.665 15:58:30 -- accel/accel.sh@21 -- # val= 00:21:27.665 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:21:27.665 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:21:27.665 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:21:27.665 ************************************ 00:21:27.665 END TEST accel_copy_crc32c_C2 00:21:27.665 ************************************ 00:21:27.665 15:58:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:21:27.665 15:58:30 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:21:27.665 15:58:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:27.665 00:21:27.665 real 0m2.823s 00:21:27.665 user 0m2.465s 00:21:27.665 sys 0m0.151s 00:21:27.665 15:58:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:27.665 15:58:30 -- common/autotest_common.sh@10 -- # set +x 00:21:27.665 15:58:30 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:21:27.665 15:58:30 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:21:27.665 15:58:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:27.665 15:58:30 -- common/autotest_common.sh@10 -- # set +x 00:21:27.665 ************************************ 00:21:27.665 START TEST accel_dualcast 00:21:27.665 ************************************ 00:21:27.665 15:58:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:21:27.665 15:58:30 -- accel/accel.sh@16 -- # local accel_opc 00:21:27.665 15:58:30 -- accel/accel.sh@17 -- # local accel_module 00:21:27.665 15:58:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:21:27.665 15:58:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:21:27.665 15:58:30 -- accel/accel.sh@12 -- # build_accel_config 00:21:27.665 15:58:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:27.665 15:58:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:27.665 15:58:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:27.665 15:58:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:27.665 15:58:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:27.665 15:58:30 -- accel/accel.sh@41 -- # local IFS=, 00:21:27.665 15:58:30 -- accel/accel.sh@42 -- # jq -r . 00:21:27.923 [2024-07-22 15:58:30.539003] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:27.923 [2024-07-22 15:58:30.539128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56360 ] 00:21:27.923 [2024-07-22 15:58:30.677549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.923 [2024-07-22 15:58:30.761103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.297 15:58:31 -- accel/accel.sh@18 -- # out=' 00:21:29.297 SPDK Configuration: 00:21:29.297 Core mask: 0x1 00:21:29.297 00:21:29.297 Accel Perf Configuration: 00:21:29.297 Workload Type: dualcast 00:21:29.297 Transfer size: 4096 bytes 00:21:29.297 Vector count 1 00:21:29.297 Module: software 00:21:29.297 Queue depth: 32 00:21:29.297 Allocate depth: 32 00:21:29.297 # threads/core: 1 00:21:29.297 Run time: 1 seconds 00:21:29.297 Verify: Yes 00:21:29.297 00:21:29.297 Running for 1 seconds... 00:21:29.297 00:21:29.297 Core,Thread Transfers Bandwidth Failed Miscompares 00:21:29.297 ------------------------------------------------------------------------------------ 00:21:29.297 0,0 322208/s 1258 MiB/s 0 0 00:21:29.297 ==================================================================================== 00:21:29.297 Total 322208/s 1258 MiB/s 0 0' 00:21:29.297 15:58:31 -- accel/accel.sh@20 -- # IFS=: 00:21:29.297 15:58:31 -- accel/accel.sh@20 -- # read -r var val 00:21:29.297 15:58:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:21:29.297 15:58:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:21:29.297 15:58:31 -- accel/accel.sh@12 -- # build_accel_config 00:21:29.297 15:58:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:29.297 15:58:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:29.297 15:58:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:29.297 15:58:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:29.297 15:58:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:29.297 15:58:31 -- accel/accel.sh@41 -- # local IFS=, 00:21:29.297 15:58:31 -- accel/accel.sh@42 -- # jq -r . 
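Every case in this file reduces to one accel_perf invocation; the "-c /dev/fd/62" argument in the trace is the JSON accel config that build_accel_config pipes in through jq, and it is empty for these runs (accel_json_cfg=()), which is why the software module is reported everywhere. As a rough sketch, a single run such as this dualcast case could presumably be reproduced from a built SPDK tree with the flags shown above and no config at all:

  # hypothetical standalone reproduction; assumes the default software module is acceptable
  $ ./build/examples/accel_perf -t 1 -w dualcast -y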
00:21:29.297 [2024-07-22 15:58:31.966913] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:29.297 [2024-07-22 15:58:31.967044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56380 ] 00:21:29.297 [2024-07-22 15:58:32.107731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.555 [2024-07-22 15:58:32.166262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.555 15:58:32 -- accel/accel.sh@21 -- # val= 00:21:29.555 15:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # IFS=: 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # read -r var val 00:21:29.555 15:58:32 -- accel/accel.sh@21 -- # val= 00:21:29.555 15:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # IFS=: 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # read -r var val 00:21:29.555 15:58:32 -- accel/accel.sh@21 -- # val=0x1 00:21:29.555 15:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # IFS=: 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # read -r var val 00:21:29.555 15:58:32 -- accel/accel.sh@21 -- # val= 00:21:29.555 15:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # IFS=: 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # read -r var val 00:21:29.555 15:58:32 -- accel/accel.sh@21 -- # val= 00:21:29.555 15:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # IFS=: 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # read -r var val 00:21:29.555 15:58:32 -- accel/accel.sh@21 -- # val=dualcast 00:21:29.555 15:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:21:29.555 15:58:32 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # IFS=: 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # read -r var val 00:21:29.555 15:58:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:21:29.555 15:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # IFS=: 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # read -r var val 00:21:29.555 15:58:32 -- accel/accel.sh@21 -- # val= 00:21:29.555 15:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # IFS=: 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # read -r var val 00:21:29.555 15:58:32 -- accel/accel.sh@21 -- # val=software 00:21:29.555 15:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:21:29.555 15:58:32 -- accel/accel.sh@23 -- # accel_module=software 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # IFS=: 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # read -r var val 00:21:29.555 15:58:32 -- accel/accel.sh@21 -- # val=32 00:21:29.555 15:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # IFS=: 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # read -r var val 00:21:29.555 15:58:32 -- accel/accel.sh@21 -- # val=32 00:21:29.555 15:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # IFS=: 00:21:29.555 15:58:32 -- accel/accel.sh@20 -- # read -r var val 00:21:29.555 15:58:32 -- accel/accel.sh@21 -- # val=1 00:21:29.556 15:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:21:29.556 15:58:32 -- accel/accel.sh@20 -- # IFS=: 00:21:29.556 
15:58:32 -- accel/accel.sh@20 -- # read -r var val 00:21:29.556 15:58:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:21:29.556 15:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:21:29.556 15:58:32 -- accel/accel.sh@20 -- # IFS=: 00:21:29.556 15:58:32 -- accel/accel.sh@20 -- # read -r var val 00:21:29.556 15:58:32 -- accel/accel.sh@21 -- # val=Yes 00:21:29.556 15:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:21:29.556 15:58:32 -- accel/accel.sh@20 -- # IFS=: 00:21:29.556 15:58:32 -- accel/accel.sh@20 -- # read -r var val 00:21:29.556 15:58:32 -- accel/accel.sh@21 -- # val= 00:21:29.556 15:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:21:29.556 15:58:32 -- accel/accel.sh@20 -- # IFS=: 00:21:29.556 15:58:32 -- accel/accel.sh@20 -- # read -r var val 00:21:29.556 15:58:32 -- accel/accel.sh@21 -- # val= 00:21:29.556 15:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:21:29.556 15:58:32 -- accel/accel.sh@20 -- # IFS=: 00:21:29.556 15:58:32 -- accel/accel.sh@20 -- # read -r var val 00:21:30.495 15:58:33 -- accel/accel.sh@21 -- # val= 00:21:30.495 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:21:30.495 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:21:30.495 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:21:30.495 15:58:33 -- accel/accel.sh@21 -- # val= 00:21:30.495 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:21:30.495 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:21:30.495 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:21:30.495 15:58:33 -- accel/accel.sh@21 -- # val= 00:21:30.495 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:21:30.495 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:21:30.495 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:21:30.495 15:58:33 -- accel/accel.sh@21 -- # val= 00:21:30.495 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:21:30.495 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:21:30.495 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:21:30.495 15:58:33 -- accel/accel.sh@21 -- # val= 00:21:30.495 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:21:30.495 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:21:30.495 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:21:30.495 15:58:33 -- accel/accel.sh@21 -- # val= 00:21:30.495 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:21:30.495 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:21:30.495 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:21:30.495 15:58:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:21:30.495 15:58:33 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:21:30.495 15:58:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:30.495 00:21:30.495 real 0m2.824s 00:21:30.495 user 0m2.449s 00:21:30.495 sys 0m0.165s 00:21:30.495 15:58:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:30.495 15:58:33 -- common/autotest_common.sh@10 -- # set +x 00:21:30.495 ************************************ 00:21:30.495 END TEST accel_dualcast 00:21:30.495 ************************************ 00:21:30.771 15:58:33 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:21:30.771 15:58:33 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:21:30.771 15:58:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:30.771 15:58:33 -- common/autotest_common.sh@10 -- # set +x 00:21:30.771 ************************************ 00:21:30.771 START TEST accel_compare 00:21:30.771 ************************************ 00:21:30.771 15:58:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:21:30.771 
15:58:33 -- accel/accel.sh@16 -- # local accel_opc 00:21:30.771 15:58:33 -- accel/accel.sh@17 -- # local accel_module 00:21:30.771 15:58:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:21:30.771 15:58:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:21:30.771 15:58:33 -- accel/accel.sh@12 -- # build_accel_config 00:21:30.771 15:58:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:30.771 15:58:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:30.771 15:58:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:30.771 15:58:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:30.771 15:58:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:30.771 15:58:33 -- accel/accel.sh@41 -- # local IFS=, 00:21:30.771 15:58:33 -- accel/accel.sh@42 -- # jq -r . 00:21:30.771 [2024-07-22 15:58:33.405218] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:30.771 [2024-07-22 15:58:33.405318] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56413 ] 00:21:30.771 [2024-07-22 15:58:33.546148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.771 [2024-07-22 15:58:33.618954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.147 15:58:34 -- accel/accel.sh@18 -- # out=' 00:21:32.147 SPDK Configuration: 00:21:32.147 Core mask: 0x1 00:21:32.147 00:21:32.147 Accel Perf Configuration: 00:21:32.147 Workload Type: compare 00:21:32.147 Transfer size: 4096 bytes 00:21:32.147 Vector count 1 00:21:32.147 Module: software 00:21:32.147 Queue depth: 32 00:21:32.147 Allocate depth: 32 00:21:32.147 # threads/core: 1 00:21:32.147 Run time: 1 seconds 00:21:32.147 Verify: Yes 00:21:32.147 00:21:32.147 Running for 1 seconds... 00:21:32.147 00:21:32.147 Core,Thread Transfers Bandwidth Failed Miscompares 00:21:32.147 ------------------------------------------------------------------------------------ 00:21:32.147 0,0 426816/s 1667 MiB/s 0 0 00:21:32.147 ==================================================================================== 00:21:32.147 Total 426816/s 1667 MiB/s 0 0' 00:21:32.147 15:58:34 -- accel/accel.sh@20 -- # IFS=: 00:21:32.147 15:58:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:21:32.147 15:58:34 -- accel/accel.sh@20 -- # read -r var val 00:21:32.147 15:58:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:21:32.147 15:58:34 -- accel/accel.sh@12 -- # build_accel_config 00:21:32.147 15:58:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:32.147 15:58:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:32.147 15:58:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:32.147 15:58:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:32.147 15:58:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:32.147 15:58:34 -- accel/accel.sh@41 -- # local IFS=, 00:21:32.147 15:58:34 -- accel/accel.sh@42 -- # jq -r . 00:21:32.147 [2024-07-22 15:58:34.806549] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:32.147 [2024-07-22 15:58:34.806649] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56428 ] 00:21:32.147 [2024-07-22 15:58:34.947717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.147 [2024-07-22 15:58:35.005038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.406 15:58:35 -- accel/accel.sh@21 -- # val= 00:21:32.406 15:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # IFS=: 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # read -r var val 00:21:32.406 15:58:35 -- accel/accel.sh@21 -- # val= 00:21:32.406 15:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # IFS=: 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # read -r var val 00:21:32.406 15:58:35 -- accel/accel.sh@21 -- # val=0x1 00:21:32.406 15:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # IFS=: 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # read -r var val 00:21:32.406 15:58:35 -- accel/accel.sh@21 -- # val= 00:21:32.406 15:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # IFS=: 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # read -r var val 00:21:32.406 15:58:35 -- accel/accel.sh@21 -- # val= 00:21:32.406 15:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # IFS=: 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # read -r var val 00:21:32.406 15:58:35 -- accel/accel.sh@21 -- # val=compare 00:21:32.406 15:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:21:32.406 15:58:35 -- accel/accel.sh@24 -- # accel_opc=compare 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # IFS=: 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # read -r var val 00:21:32.406 15:58:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:21:32.406 15:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # IFS=: 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # read -r var val 00:21:32.406 15:58:35 -- accel/accel.sh@21 -- # val= 00:21:32.406 15:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # IFS=: 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # read -r var val 00:21:32.406 15:58:35 -- accel/accel.sh@21 -- # val=software 00:21:32.406 15:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:21:32.406 15:58:35 -- accel/accel.sh@23 -- # accel_module=software 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # IFS=: 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # read -r var val 00:21:32.406 15:58:35 -- accel/accel.sh@21 -- # val=32 00:21:32.406 15:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # IFS=: 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # read -r var val 00:21:32.406 15:58:35 -- accel/accel.sh@21 -- # val=32 00:21:32.406 15:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # IFS=: 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # read -r var val 00:21:32.406 15:58:35 -- accel/accel.sh@21 -- # val=1 00:21:32.406 15:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # IFS=: 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # read -r var val 00:21:32.406 15:58:35 -- accel/accel.sh@21 -- # val='1 seconds' 
00:21:32.406 15:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # IFS=: 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # read -r var val 00:21:32.406 15:58:35 -- accel/accel.sh@21 -- # val=Yes 00:21:32.406 15:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # IFS=: 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # read -r var val 00:21:32.406 15:58:35 -- accel/accel.sh@21 -- # val= 00:21:32.406 15:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # IFS=: 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # read -r var val 00:21:32.406 15:58:35 -- accel/accel.sh@21 -- # val= 00:21:32.406 15:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # IFS=: 00:21:32.406 15:58:35 -- accel/accel.sh@20 -- # read -r var val 00:21:33.342 15:58:36 -- accel/accel.sh@21 -- # val= 00:21:33.342 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:21:33.342 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:21:33.342 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:21:33.342 15:58:36 -- accel/accel.sh@21 -- # val= 00:21:33.342 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:21:33.342 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:21:33.342 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:21:33.342 15:58:36 -- accel/accel.sh@21 -- # val= 00:21:33.342 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:21:33.342 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:21:33.342 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:21:33.342 15:58:36 -- accel/accel.sh@21 -- # val= 00:21:33.342 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:21:33.342 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:21:33.342 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:21:33.342 15:58:36 -- accel/accel.sh@21 -- # val= 00:21:33.342 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:21:33.342 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:21:33.342 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:21:33.342 15:58:36 -- accel/accel.sh@21 -- # val= 00:21:33.342 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:21:33.342 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:21:33.342 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:21:33.342 15:58:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:21:33.342 15:58:36 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:21:33.342 15:58:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:33.342 00:21:33.342 real 0m2.792s 00:21:33.342 user 0m2.435s 00:21:33.342 sys 0m0.151s 00:21:33.342 ************************************ 00:21:33.342 END TEST accel_compare 00:21:33.342 ************************************ 00:21:33.343 15:58:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:33.343 15:58:36 -- common/autotest_common.sh@10 -- # set +x 00:21:33.603 15:58:36 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:21:33.603 15:58:36 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:21:33.603 15:58:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:33.603 15:58:36 -- common/autotest_common.sh@10 -- # set +x 00:21:33.603 ************************************ 00:21:33.603 START TEST accel_xor 00:21:33.603 ************************************ 00:21:33.603 15:58:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:21:33.603 15:58:36 -- accel/accel.sh@16 -- # local accel_opc 00:21:33.603 15:58:36 -- accel/accel.sh@17 -- # local accel_module 00:21:33.603 
15:58:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:21:33.603 15:58:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:21:33.603 15:58:36 -- accel/accel.sh@12 -- # build_accel_config 00:21:33.604 15:58:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:33.604 15:58:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:33.604 15:58:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:33.604 15:58:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:33.604 15:58:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:33.604 15:58:36 -- accel/accel.sh@41 -- # local IFS=, 00:21:33.604 15:58:36 -- accel/accel.sh@42 -- # jq -r . 00:21:33.604 [2024-07-22 15:58:36.239938] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:33.604 [2024-07-22 15:58:36.240028] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56463 ] 00:21:33.604 [2024-07-22 15:58:36.377968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.604 [2024-07-22 15:58:36.444918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.978 15:58:37 -- accel/accel.sh@18 -- # out=' 00:21:34.978 SPDK Configuration: 00:21:34.978 Core mask: 0x1 00:21:34.978 00:21:34.978 Accel Perf Configuration: 00:21:34.978 Workload Type: xor 00:21:34.978 Source buffers: 2 00:21:34.978 Transfer size: 4096 bytes 00:21:34.978 Vector count 1 00:21:34.978 Module: software 00:21:34.978 Queue depth: 32 00:21:34.978 Allocate depth: 32 00:21:34.978 # threads/core: 1 00:21:34.978 Run time: 1 seconds 00:21:34.978 Verify: Yes 00:21:34.978 00:21:34.978 Running for 1 seconds... 00:21:34.978 00:21:34.978 Core,Thread Transfers Bandwidth Failed Miscompares 00:21:34.978 ------------------------------------------------------------------------------------ 00:21:34.978 0,0 244288/s 954 MiB/s 0 0 00:21:34.978 ==================================================================================== 00:21:34.978 Total 244288/s 954 MiB/s 0 0' 00:21:34.978 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:21:34.978 15:58:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:21:34.978 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:21:34.978 15:58:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:21:34.978 15:58:37 -- accel/accel.sh@12 -- # build_accel_config 00:21:34.978 15:58:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:34.978 15:58:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:34.978 15:58:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:34.978 15:58:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:34.978 15:58:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:34.978 15:58:37 -- accel/accel.sh@41 -- # local IFS=, 00:21:34.978 15:58:37 -- accel/accel.sh@42 -- # jq -r . 00:21:34.978 [2024-07-22 15:58:37.642958] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:34.978 [2024-07-22 15:58:37.643628] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56482 ] 00:21:34.978 [2024-07-22 15:58:37.784252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.237 [2024-07-22 15:58:37.850799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.237 15:58:37 -- accel/accel.sh@21 -- # val= 00:21:35.237 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:21:35.237 15:58:37 -- accel/accel.sh@21 -- # val= 00:21:35.237 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:21:35.237 15:58:37 -- accel/accel.sh@21 -- # val=0x1 00:21:35.237 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:21:35.237 15:58:37 -- accel/accel.sh@21 -- # val= 00:21:35.237 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:21:35.237 15:58:37 -- accel/accel.sh@21 -- # val= 00:21:35.237 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:21:35.237 15:58:37 -- accel/accel.sh@21 -- # val=xor 00:21:35.237 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:21:35.237 15:58:37 -- accel/accel.sh@24 -- # accel_opc=xor 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:21:35.237 15:58:37 -- accel/accel.sh@21 -- # val=2 00:21:35.237 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:21:35.237 15:58:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:21:35.237 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:21:35.237 15:58:37 -- accel/accel.sh@21 -- # val= 00:21:35.237 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:21:35.237 15:58:37 -- accel/accel.sh@21 -- # val=software 00:21:35.237 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:21:35.237 15:58:37 -- accel/accel.sh@23 -- # accel_module=software 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:21:35.237 15:58:37 -- accel/accel.sh@21 -- # val=32 00:21:35.237 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:21:35.237 15:58:37 -- accel/accel.sh@21 -- # val=32 00:21:35.237 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:21:35.237 15:58:37 -- accel/accel.sh@21 -- # val=1 00:21:35.237 15:58:37 -- 
accel/accel.sh@22 -- # case "$var" in 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:21:35.237 15:58:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:21:35.237 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:21:35.237 15:58:37 -- accel/accel.sh@21 -- # val=Yes 00:21:35.237 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:21:35.237 15:58:37 -- accel/accel.sh@21 -- # val= 00:21:35.237 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:21:35.237 15:58:37 -- accel/accel.sh@21 -- # val= 00:21:35.237 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:21:35.237 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:21:36.172 15:58:39 -- accel/accel.sh@21 -- # val= 00:21:36.172 15:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:21:36.172 15:58:39 -- accel/accel.sh@20 -- # IFS=: 00:21:36.172 15:58:39 -- accel/accel.sh@20 -- # read -r var val 00:21:36.172 15:58:39 -- accel/accel.sh@21 -- # val= 00:21:36.172 15:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:21:36.172 15:58:39 -- accel/accel.sh@20 -- # IFS=: 00:21:36.172 15:58:39 -- accel/accel.sh@20 -- # read -r var val 00:21:36.172 15:58:39 -- accel/accel.sh@21 -- # val= 00:21:36.172 15:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:21:36.172 15:58:39 -- accel/accel.sh@20 -- # IFS=: 00:21:36.172 15:58:39 -- accel/accel.sh@20 -- # read -r var val 00:21:36.172 15:58:39 -- accel/accel.sh@21 -- # val= 00:21:36.172 15:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:21:36.172 15:58:39 -- accel/accel.sh@20 -- # IFS=: 00:21:36.172 15:58:39 -- accel/accel.sh@20 -- # read -r var val 00:21:36.172 15:58:39 -- accel/accel.sh@21 -- # val= 00:21:36.172 15:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:21:36.172 15:58:39 -- accel/accel.sh@20 -- # IFS=: 00:21:36.172 15:58:39 -- accel/accel.sh@20 -- # read -r var val 00:21:36.172 15:58:39 -- accel/accel.sh@21 -- # val= 00:21:36.172 15:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:21:36.172 15:58:39 -- accel/accel.sh@20 -- # IFS=: 00:21:36.172 ************************************ 00:21:36.172 END TEST accel_xor 00:21:36.172 ************************************ 00:21:36.172 15:58:39 -- accel/accel.sh@20 -- # read -r var val 00:21:36.172 15:58:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:21:36.172 15:58:39 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:21:36.172 15:58:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:36.172 00:21:36.172 real 0m2.808s 00:21:36.172 user 0m2.453s 00:21:36.172 sys 0m0.146s 00:21:36.172 15:58:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:36.172 15:58:39 -- common/autotest_common.sh@10 -- # set +x 00:21:36.430 15:58:39 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:21:36.430 15:58:39 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:21:36.430 15:58:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:36.430 15:58:39 -- common/autotest_common.sh@10 -- # set +x 00:21:36.430 ************************************ 00:21:36.430 START TEST accel_xor 00:21:36.430 ************************************ 00:21:36.430 
15:58:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:21:36.430 15:58:39 -- accel/accel.sh@16 -- # local accel_opc 00:21:36.430 15:58:39 -- accel/accel.sh@17 -- # local accel_module 00:21:36.430 15:58:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:21:36.430 15:58:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:21:36.430 15:58:39 -- accel/accel.sh@12 -- # build_accel_config 00:21:36.430 15:58:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:36.430 15:58:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:36.430 15:58:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:36.430 15:58:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:36.430 15:58:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:36.431 15:58:39 -- accel/accel.sh@41 -- # local IFS=, 00:21:36.431 15:58:39 -- accel/accel.sh@42 -- # jq -r . 00:21:36.431 [2024-07-22 15:58:39.090636] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:36.431 [2024-07-22 15:58:39.090713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56517 ] 00:21:36.431 [2024-07-22 15:58:39.227144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.688 [2024-07-22 15:58:39.294250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.652 15:58:40 -- accel/accel.sh@18 -- # out=' 00:21:37.652 SPDK Configuration: 00:21:37.652 Core mask: 0x1 00:21:37.652 00:21:37.652 Accel Perf Configuration: 00:21:37.652 Workload Type: xor 00:21:37.652 Source buffers: 3 00:21:37.652 Transfer size: 4096 bytes 00:21:37.652 Vector count 1 00:21:37.652 Module: software 00:21:37.652 Queue depth: 32 00:21:37.652 Allocate depth: 32 00:21:37.652 # threads/core: 1 00:21:37.652 Run time: 1 seconds 00:21:37.652 Verify: Yes 00:21:37.652 00:21:37.652 Running for 1 seconds... 00:21:37.652 00:21:37.652 Core,Thread Transfers Bandwidth Failed Miscompares 00:21:37.652 ------------------------------------------------------------------------------------ 00:21:37.652 0,0 221792/s 866 MiB/s 0 0 00:21:37.652 ==================================================================================== 00:21:37.652 Total 221792/s 866 MiB/s 0 0' 00:21:37.652 15:58:40 -- accel/accel.sh@20 -- # IFS=: 00:21:37.652 15:58:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:21:37.652 15:58:40 -- accel/accel.sh@20 -- # read -r var val 00:21:37.652 15:58:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:21:37.652 15:58:40 -- accel/accel.sh@12 -- # build_accel_config 00:21:37.652 15:58:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:37.652 15:58:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:37.652 15:58:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:37.652 15:58:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:37.652 15:58:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:37.652 15:58:40 -- accel/accel.sh@41 -- # local IFS=, 00:21:37.652 15:58:40 -- accel/accel.sh@42 -- # jq -r . 00:21:37.652 [2024-07-22 15:58:40.483849] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
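The two xor cases differ only in the number of source buffers: the earlier accel_xor run reports "Source buffers: 2" (the default) at roughly 954 MiB/s, while this -x 3 variant reports "Source buffers: 3" at roughly 866 MiB/s, the extra input stream costing some throughput on the software path. In terms of the binary the harness drives (paths as in the trace, shown here as a sketch rather than a verbatim command from this log):

  $ ./build/examples/accel_perf -t 1 -w xor -y        # 2 source buffers
  $ ./build/examples/accel_perf -t 1 -w xor -y -x 3   # 3 source buffers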
00:21:37.652 [2024-07-22 15:58:40.483952] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56531 ] 00:21:37.910 [2024-07-22 15:58:40.616116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.910 [2024-07-22 15:58:40.684698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.910 15:58:40 -- accel/accel.sh@21 -- # val= 00:21:37.910 15:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:21:37.910 15:58:40 -- accel/accel.sh@20 -- # IFS=: 00:21:37.910 15:58:40 -- accel/accel.sh@20 -- # read -r var val 00:21:37.910 15:58:40 -- accel/accel.sh@21 -- # val= 00:21:37.910 15:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # IFS=: 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # read -r var val 00:21:37.911 15:58:40 -- accel/accel.sh@21 -- # val=0x1 00:21:37.911 15:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # IFS=: 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # read -r var val 00:21:37.911 15:58:40 -- accel/accel.sh@21 -- # val= 00:21:37.911 15:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # IFS=: 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # read -r var val 00:21:37.911 15:58:40 -- accel/accel.sh@21 -- # val= 00:21:37.911 15:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # IFS=: 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # read -r var val 00:21:37.911 15:58:40 -- accel/accel.sh@21 -- # val=xor 00:21:37.911 15:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:21:37.911 15:58:40 -- accel/accel.sh@24 -- # accel_opc=xor 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # IFS=: 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # read -r var val 00:21:37.911 15:58:40 -- accel/accel.sh@21 -- # val=3 00:21:37.911 15:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # IFS=: 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # read -r var val 00:21:37.911 15:58:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:21:37.911 15:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # IFS=: 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # read -r var val 00:21:37.911 15:58:40 -- accel/accel.sh@21 -- # val= 00:21:37.911 15:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # IFS=: 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # read -r var val 00:21:37.911 15:58:40 -- accel/accel.sh@21 -- # val=software 00:21:37.911 15:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:21:37.911 15:58:40 -- accel/accel.sh@23 -- # accel_module=software 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # IFS=: 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # read -r var val 00:21:37.911 15:58:40 -- accel/accel.sh@21 -- # val=32 00:21:37.911 15:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # IFS=: 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # read -r var val 00:21:37.911 15:58:40 -- accel/accel.sh@21 -- # val=32 00:21:37.911 15:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # IFS=: 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # read -r var val 00:21:37.911 15:58:40 -- accel/accel.sh@21 -- # val=1 00:21:37.911 15:58:40 -- 
accel/accel.sh@22 -- # case "$var" in 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # IFS=: 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # read -r var val 00:21:37.911 15:58:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:21:37.911 15:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # IFS=: 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # read -r var val 00:21:37.911 15:58:40 -- accel/accel.sh@21 -- # val=Yes 00:21:37.911 15:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # IFS=: 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # read -r var val 00:21:37.911 15:58:40 -- accel/accel.sh@21 -- # val= 00:21:37.911 15:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # IFS=: 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # read -r var val 00:21:37.911 15:58:40 -- accel/accel.sh@21 -- # val= 00:21:37.911 15:58:40 -- accel/accel.sh@22 -- # case "$var" in 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # IFS=: 00:21:37.911 15:58:40 -- accel/accel.sh@20 -- # read -r var val 00:21:39.284 15:58:41 -- accel/accel.sh@21 -- # val= 00:21:39.284 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:21:39.284 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:21:39.284 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:21:39.284 15:58:41 -- accel/accel.sh@21 -- # val= 00:21:39.284 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:21:39.284 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:21:39.284 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:21:39.284 15:58:41 -- accel/accel.sh@21 -- # val= 00:21:39.284 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:21:39.284 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:21:39.284 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:21:39.284 15:58:41 -- accel/accel.sh@21 -- # val= 00:21:39.284 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:21:39.284 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:21:39.284 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:21:39.284 15:58:41 -- accel/accel.sh@21 -- # val= 00:21:39.284 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:21:39.284 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:21:39.284 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:21:39.284 15:58:41 -- accel/accel.sh@21 -- # val= 00:21:39.284 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:21:39.284 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:21:39.284 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:21:39.284 15:58:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:21:39.284 15:58:41 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:21:39.284 15:58:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:39.284 00:21:39.284 real 0m2.786s 00:21:39.284 user 0m2.432s 00:21:39.284 sys 0m0.145s 00:21:39.284 15:58:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:39.284 ************************************ 00:21:39.284 END TEST accel_xor 00:21:39.284 ************************************ 00:21:39.284 15:58:41 -- common/autotest_common.sh@10 -- # set +x 00:21:39.284 15:58:41 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:21:39.284 15:58:41 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:39.284 15:58:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:39.284 15:58:41 -- common/autotest_common.sh@10 -- # set +x 00:21:39.284 ************************************ 00:21:39.284 START TEST accel_dif_verify 00:21:39.284 ************************************ 
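Note on the accel_dif_verify case that starts here: per the configuration echoed in the run below, it exercises software DIF (Data Integrity Field) verification, with each 4096-byte vector laid out as 512-byte blocks carrying 8 bytes of protection metadata; the tool's own Verify field reads "No" since -y is not passed for this workload. A minimal sketch of an equivalent manual invocation, under the same assumptions as the XOR note above:

    # Re-run the traced DIF-verify workload by hand: 1 second, software module, 4 KiB transfers
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify

    # Sanity-check the reported throughput from the transfer rate in the Total row below
    awk 'BEGIN { printf "%.1f MiB/s\n", 93856 * 4096 / 1048576 }'   # ~366.6, close to the printed 366 MiB/s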
00:21:39.284 15:58:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:21:39.284 15:58:41 -- accel/accel.sh@16 -- # local accel_opc 00:21:39.284 15:58:41 -- accel/accel.sh@17 -- # local accel_module 00:21:39.284 15:58:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:21:39.284 15:58:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:21:39.284 15:58:41 -- accel/accel.sh@12 -- # build_accel_config 00:21:39.284 15:58:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:39.284 15:58:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:39.284 15:58:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:39.284 15:58:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:39.284 15:58:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:39.284 15:58:41 -- accel/accel.sh@41 -- # local IFS=, 00:21:39.284 15:58:41 -- accel/accel.sh@42 -- # jq -r . 00:21:39.284 [2024-07-22 15:58:41.929734] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:39.284 [2024-07-22 15:58:41.929956] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56565 ] 00:21:39.284 [2024-07-22 15:58:42.060271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.284 [2024-07-22 15:58:42.117222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.659 15:58:43 -- accel/accel.sh@18 -- # out=' 00:21:40.659 SPDK Configuration: 00:21:40.659 Core mask: 0x1 00:21:40.659 00:21:40.659 Accel Perf Configuration: 00:21:40.659 Workload Type: dif_verify 00:21:40.659 Vector size: 4096 bytes 00:21:40.659 Transfer size: 4096 bytes 00:21:40.659 Block size: 512 bytes 00:21:40.659 Metadata size: 8 bytes 00:21:40.659 Vector count 1 00:21:40.659 Module: software 00:21:40.659 Queue depth: 32 00:21:40.659 Allocate depth: 32 00:21:40.659 # threads/core: 1 00:21:40.659 Run time: 1 seconds 00:21:40.659 Verify: No 00:21:40.659 00:21:40.659 Running for 1 seconds... 00:21:40.659 00:21:40.659 Core,Thread Transfers Bandwidth Failed Miscompares 00:21:40.659 ------------------------------------------------------------------------------------ 00:21:40.659 0,0 93856/s 372 MiB/s 0 0 00:21:40.659 ==================================================================================== 00:21:40.659 Total 93856/s 366 MiB/s 0 0' 00:21:40.659 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:21:40.659 15:58:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:21:40.659 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:21:40.659 15:58:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:21:40.659 15:58:43 -- accel/accel.sh@12 -- # build_accel_config 00:21:40.659 15:58:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:40.659 15:58:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:40.659 15:58:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:40.659 15:58:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:40.659 15:58:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:40.659 15:58:43 -- accel/accel.sh@41 -- # local IFS=, 00:21:40.659 15:58:43 -- accel/accel.sh@42 -- # jq -r . 00:21:40.659 [2024-07-22 15:58:43.310092] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:40.659 [2024-07-22 15:58:43.310206] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56585 ] 00:21:40.659 [2024-07-22 15:58:43.450988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.918 [2024-07-22 15:58:43.525600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.918 15:58:43 -- accel/accel.sh@21 -- # val= 00:21:40.918 15:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:21:40.918 15:58:43 -- accel/accel.sh@21 -- # val= 00:21:40.918 15:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:21:40.918 15:58:43 -- accel/accel.sh@21 -- # val=0x1 00:21:40.918 15:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:21:40.918 15:58:43 -- accel/accel.sh@21 -- # val= 00:21:40.918 15:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:21:40.918 15:58:43 -- accel/accel.sh@21 -- # val= 00:21:40.918 15:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:21:40.918 15:58:43 -- accel/accel.sh@21 -- # val=dif_verify 00:21:40.918 15:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:21:40.918 15:58:43 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:21:40.918 15:58:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:21:40.918 15:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:21:40.918 15:58:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:21:40.918 15:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:21:40.918 15:58:43 -- accel/accel.sh@21 -- # val='512 bytes' 00:21:40.918 15:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:21:40.918 15:58:43 -- accel/accel.sh@21 -- # val='8 bytes' 00:21:40.918 15:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:21:40.918 15:58:43 -- accel/accel.sh@21 -- # val= 00:21:40.918 15:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:21:40.918 15:58:43 -- accel/accel.sh@21 -- # val=software 00:21:40.918 15:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:21:40.918 15:58:43 -- accel/accel.sh@23 -- # accel_module=software 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:21:40.918 15:58:43 -- accel/accel.sh@21 
-- # val=32 00:21:40.918 15:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:21:40.918 15:58:43 -- accel/accel.sh@21 -- # val=32 00:21:40.918 15:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:21:40.918 15:58:43 -- accel/accel.sh@21 -- # val=1 00:21:40.918 15:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:21:40.918 15:58:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:21:40.918 15:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:21:40.918 15:58:43 -- accel/accel.sh@21 -- # val=No 00:21:40.918 15:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:21:40.918 15:58:43 -- accel/accel.sh@21 -- # val= 00:21:40.918 15:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:21:40.918 15:58:43 -- accel/accel.sh@21 -- # val= 00:21:40.918 15:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:21:40.918 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:21:41.852 15:58:44 -- accel/accel.sh@21 -- # val= 00:21:41.852 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:21:41.852 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:21:41.852 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:21:41.852 15:58:44 -- accel/accel.sh@21 -- # val= 00:21:41.852 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:21:41.852 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:21:41.852 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:21:41.852 15:58:44 -- accel/accel.sh@21 -- # val= 00:21:41.852 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:21:41.852 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:21:41.852 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:21:41.852 15:58:44 -- accel/accel.sh@21 -- # val= 00:21:41.852 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:21:41.852 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:21:41.852 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:21:41.852 15:58:44 -- accel/accel.sh@21 -- # val= 00:21:41.852 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:21:41.852 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:21:41.852 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:21:41.852 15:58:44 -- accel/accel.sh@21 -- # val= 00:21:41.852 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:21:41.852 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:21:41.852 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:21:41.852 15:58:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:21:41.852 15:58:44 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:21:41.852 15:58:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:41.852 00:21:41.852 real 0m2.803s 00:21:41.852 user 0m2.446s 00:21:41.852 sys 0m0.149s 00:21:41.852 15:58:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:41.852 15:58:44 -- common/autotest_common.sh@10 -- # set +x 00:21:41.852 ************************************ 00:21:41.852 END TEST 
accel_dif_verify 00:21:41.852 ************************************ 00:21:42.111 15:58:44 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:21:42.111 15:58:44 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:42.111 15:58:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:42.111 15:58:44 -- common/autotest_common.sh@10 -- # set +x 00:21:42.111 ************************************ 00:21:42.111 START TEST accel_dif_generate 00:21:42.111 ************************************ 00:21:42.111 15:58:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:21:42.111 15:58:44 -- accel/accel.sh@16 -- # local accel_opc 00:21:42.111 15:58:44 -- accel/accel.sh@17 -- # local accel_module 00:21:42.111 15:58:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:21:42.111 15:58:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:21:42.111 15:58:44 -- accel/accel.sh@12 -- # build_accel_config 00:21:42.111 15:58:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:42.111 15:58:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:42.111 15:58:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:42.111 15:58:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:42.111 15:58:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:42.111 15:58:44 -- accel/accel.sh@41 -- # local IFS=, 00:21:42.111 15:58:44 -- accel/accel.sh@42 -- # jq -r . 00:21:42.111 [2024-07-22 15:58:44.775932] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:42.111 [2024-07-22 15:58:44.776626] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56616 ] 00:21:42.111 [2024-07-22 15:58:44.912513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.111 [2024-07-22 15:58:44.969112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.488 15:58:46 -- accel/accel.sh@18 -- # out=' 00:21:43.488 SPDK Configuration: 00:21:43.488 Core mask: 0x1 00:21:43.488 00:21:43.488 Accel Perf Configuration: 00:21:43.488 Workload Type: dif_generate 00:21:43.488 Vector size: 4096 bytes 00:21:43.488 Transfer size: 4096 bytes 00:21:43.488 Block size: 512 bytes 00:21:43.488 Metadata size: 8 bytes 00:21:43.488 Vector count 1 00:21:43.488 Module: software 00:21:43.488 Queue depth: 32 00:21:43.488 Allocate depth: 32 00:21:43.488 # threads/core: 1 00:21:43.488 Run time: 1 seconds 00:21:43.488 Verify: No 00:21:43.488 00:21:43.488 Running for 1 seconds... 
00:21:43.488 00:21:43.488 Core,Thread Transfers Bandwidth Failed Miscompares 00:21:43.488 ------------------------------------------------------------------------------------ 00:21:43.488 0,0 114656/s 454 MiB/s 0 0 00:21:43.488 ==================================================================================== 00:21:43.488 Total 114656/s 447 MiB/s 0 0' 00:21:43.488 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:21:43.488 15:58:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:21:43.488 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:21:43.488 15:58:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:21:43.488 15:58:46 -- accel/accel.sh@12 -- # build_accel_config 00:21:43.488 15:58:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:43.488 15:58:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:43.488 15:58:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:43.488 15:58:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:43.488 15:58:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:43.488 15:58:46 -- accel/accel.sh@41 -- # local IFS=, 00:21:43.488 15:58:46 -- accel/accel.sh@42 -- # jq -r . 00:21:43.488 [2024-07-22 15:58:46.157232] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:43.488 [2024-07-22 15:58:46.157334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56641 ] 00:21:43.488 [2024-07-22 15:58:46.298406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.747 [2024-07-22 15:58:46.356026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.747 15:58:46 -- accel/accel.sh@21 -- # val= 00:21:43.747 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:21:43.747 15:58:46 -- accel/accel.sh@21 -- # val= 00:21:43.747 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:21:43.747 15:58:46 -- accel/accel.sh@21 -- # val=0x1 00:21:43.747 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:21:43.747 15:58:46 -- accel/accel.sh@21 -- # val= 00:21:43.747 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:21:43.747 15:58:46 -- accel/accel.sh@21 -- # val= 00:21:43.747 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:21:43.747 15:58:46 -- accel/accel.sh@21 -- # val=dif_generate 00:21:43.747 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:21:43.747 15:58:46 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:21:43.747 15:58:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:21:43.747 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # read -r var val 
00:21:43.747 15:58:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:21:43.747 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:21:43.747 15:58:46 -- accel/accel.sh@21 -- # val='512 bytes' 00:21:43.747 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:21:43.747 15:58:46 -- accel/accel.sh@21 -- # val='8 bytes' 00:21:43.747 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:21:43.747 15:58:46 -- accel/accel.sh@21 -- # val= 00:21:43.747 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:21:43.747 15:58:46 -- accel/accel.sh@21 -- # val=software 00:21:43.747 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:21:43.747 15:58:46 -- accel/accel.sh@23 -- # accel_module=software 00:21:43.747 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:21:43.748 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:21:43.748 15:58:46 -- accel/accel.sh@21 -- # val=32 00:21:43.748 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:21:43.748 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:21:43.748 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:21:43.748 15:58:46 -- accel/accel.sh@21 -- # val=32 00:21:43.748 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:21:43.748 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:21:43.748 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:21:43.748 15:58:46 -- accel/accel.sh@21 -- # val=1 00:21:43.748 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:21:43.748 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:21:43.748 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:21:43.748 15:58:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:21:43.748 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:21:43.748 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:21:43.748 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:21:43.748 15:58:46 -- accel/accel.sh@21 -- # val=No 00:21:43.748 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:21:43.748 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:21:43.748 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:21:43.748 15:58:46 -- accel/accel.sh@21 -- # val= 00:21:43.748 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:21:43.748 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:21:43.748 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:21:43.748 15:58:46 -- accel/accel.sh@21 -- # val= 00:21:43.748 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:21:43.748 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:21:43.748 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:21:44.683 15:58:47 -- accel/accel.sh@21 -- # val= 00:21:44.683 15:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:21:44.683 15:58:47 -- accel/accel.sh@20 -- # IFS=: 00:21:44.683 15:58:47 -- accel/accel.sh@20 -- # read -r var val 00:21:44.683 15:58:47 -- accel/accel.sh@21 -- # val= 00:21:44.683 15:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:21:44.683 15:58:47 -- accel/accel.sh@20 -- # IFS=: 00:21:44.683 15:58:47 -- accel/accel.sh@20 -- # read -r var val 00:21:44.683 15:58:47 -- accel/accel.sh@21 -- # val= 00:21:44.683 15:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:21:44.683 15:58:47 -- 
accel/accel.sh@20 -- # IFS=: 00:21:44.683 15:58:47 -- accel/accel.sh@20 -- # read -r var val 00:21:44.683 15:58:47 -- accel/accel.sh@21 -- # val= 00:21:44.683 15:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:21:44.683 15:58:47 -- accel/accel.sh@20 -- # IFS=: 00:21:44.683 15:58:47 -- accel/accel.sh@20 -- # read -r var val 00:21:44.683 15:58:47 -- accel/accel.sh@21 -- # val= 00:21:44.683 15:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:21:44.683 15:58:47 -- accel/accel.sh@20 -- # IFS=: 00:21:44.683 15:58:47 -- accel/accel.sh@20 -- # read -r var val 00:21:44.683 15:58:47 -- accel/accel.sh@21 -- # val= 00:21:44.683 15:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:21:44.683 15:58:47 -- accel/accel.sh@20 -- # IFS=: 00:21:44.683 15:58:47 -- accel/accel.sh@20 -- # read -r var val 00:21:44.683 15:58:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:21:44.683 15:58:47 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:21:44.683 15:58:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:44.683 00:21:44.683 real 0m2.778s 00:21:44.683 user 0m2.437s 00:21:44.683 sys 0m0.139s 00:21:44.683 15:58:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:44.683 ************************************ 00:21:44.683 END TEST accel_dif_generate 00:21:44.683 ************************************ 00:21:44.683 15:58:47 -- common/autotest_common.sh@10 -- # set +x 00:21:44.942 15:58:47 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:21:44.942 15:58:47 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:44.942 15:58:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:44.942 15:58:47 -- common/autotest_common.sh@10 -- # set +x 00:21:44.942 ************************************ 00:21:44.942 START TEST accel_dif_generate_copy 00:21:44.942 ************************************ 00:21:44.942 15:58:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:21:44.942 15:58:47 -- accel/accel.sh@16 -- # local accel_opc 00:21:44.942 15:58:47 -- accel/accel.sh@17 -- # local accel_module 00:21:44.942 15:58:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:21:44.942 15:58:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:21:44.942 15:58:47 -- accel/accel.sh@12 -- # build_accel_config 00:21:44.942 15:58:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:44.942 15:58:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:44.942 15:58:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:44.942 15:58:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:44.942 15:58:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:44.942 15:58:47 -- accel/accel.sh@41 -- # local IFS=, 00:21:44.943 15:58:47 -- accel/accel.sh@42 -- # jq -r . 00:21:44.943 [2024-07-22 15:58:47.600583] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:44.943 [2024-07-22 15:58:47.600682] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56670 ] 00:21:44.943 [2024-07-22 15:58:47.739932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.213 [2024-07-22 15:58:47.808460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.148 15:58:48 -- accel/accel.sh@18 -- # out=' 00:21:46.148 SPDK Configuration: 00:21:46.148 Core mask: 0x1 00:21:46.148 00:21:46.148 Accel Perf Configuration: 00:21:46.148 Workload Type: dif_generate_copy 00:21:46.148 Vector size: 4096 bytes 00:21:46.148 Transfer size: 4096 bytes 00:21:46.148 Vector count 1 00:21:46.148 Module: software 00:21:46.148 Queue depth: 32 00:21:46.148 Allocate depth: 32 00:21:46.148 # threads/core: 1 00:21:46.148 Run time: 1 seconds 00:21:46.148 Verify: No 00:21:46.148 00:21:46.148 Running for 1 seconds... 00:21:46.148 00:21:46.148 Core,Thread Transfers Bandwidth Failed Miscompares 00:21:46.148 ------------------------------------------------------------------------------------ 00:21:46.148 0,0 75360/s 298 MiB/s 0 0 00:21:46.148 ==================================================================================== 00:21:46.148 Total 75360/s 294 MiB/s 0 0' 00:21:46.148 15:58:48 -- accel/accel.sh@20 -- # IFS=: 00:21:46.148 15:58:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:21:46.148 15:58:48 -- accel/accel.sh@20 -- # read -r var val 00:21:46.148 15:58:48 -- accel/accel.sh@12 -- # build_accel_config 00:21:46.148 15:58:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:21:46.148 15:58:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:46.148 15:58:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:46.148 15:58:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:46.148 15:58:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:46.148 15:58:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:46.148 15:58:48 -- accel/accel.sh@41 -- # local IFS=, 00:21:46.148 15:58:48 -- accel/accel.sh@42 -- # jq -r . 00:21:46.148 [2024-07-22 15:58:49.006309] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:46.148 [2024-07-22 15:58:49.006462] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56684 ] 00:21:46.406 [2024-07-22 15:58:49.145229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.406 [2024-07-22 15:58:49.211615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.406 15:58:49 -- accel/accel.sh@21 -- # val= 00:21:46.406 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:21:46.406 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:21:46.406 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:21:46.406 15:58:49 -- accel/accel.sh@21 -- # val= 00:21:46.406 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:21:46.406 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:21:46.406 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:21:46.406 15:58:49 -- accel/accel.sh@21 -- # val=0x1 00:21:46.407 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:21:46.407 15:58:49 -- accel/accel.sh@21 -- # val= 00:21:46.407 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:21:46.407 15:58:49 -- accel/accel.sh@21 -- # val= 00:21:46.407 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:21:46.407 15:58:49 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:21:46.407 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:21:46.407 15:58:49 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:21:46.407 15:58:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:21:46.407 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:21:46.407 15:58:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:21:46.407 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:21:46.407 15:58:49 -- accel/accel.sh@21 -- # val= 00:21:46.407 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:21:46.407 15:58:49 -- accel/accel.sh@21 -- # val=software 00:21:46.407 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:21:46.407 15:58:49 -- accel/accel.sh@23 -- # accel_module=software 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:21:46.407 15:58:49 -- accel/accel.sh@21 -- # val=32 00:21:46.407 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:21:46.407 15:58:49 -- accel/accel.sh@21 -- # val=32 00:21:46.407 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:21:46.407 15:58:49 -- accel/accel.sh@21 
-- # val=1 00:21:46.407 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:21:46.407 15:58:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:21:46.407 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:21:46.407 15:58:49 -- accel/accel.sh@21 -- # val=No 00:21:46.407 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:21:46.407 15:58:49 -- accel/accel.sh@21 -- # val= 00:21:46.407 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:21:46.407 15:58:49 -- accel/accel.sh@21 -- # val= 00:21:46.407 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:21:46.407 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:21:47.783 15:58:50 -- accel/accel.sh@21 -- # val= 00:21:47.783 15:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:21:47.783 15:58:50 -- accel/accel.sh@20 -- # IFS=: 00:21:47.783 15:58:50 -- accel/accel.sh@20 -- # read -r var val 00:21:47.783 15:58:50 -- accel/accel.sh@21 -- # val= 00:21:47.783 15:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:21:47.783 15:58:50 -- accel/accel.sh@20 -- # IFS=: 00:21:47.783 15:58:50 -- accel/accel.sh@20 -- # read -r var val 00:21:47.783 15:58:50 -- accel/accel.sh@21 -- # val= 00:21:47.783 15:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:21:47.783 15:58:50 -- accel/accel.sh@20 -- # IFS=: 00:21:47.783 15:58:50 -- accel/accel.sh@20 -- # read -r var val 00:21:47.783 15:58:50 -- accel/accel.sh@21 -- # val= 00:21:47.783 15:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:21:47.783 15:58:50 -- accel/accel.sh@20 -- # IFS=: 00:21:47.783 15:58:50 -- accel/accel.sh@20 -- # read -r var val 00:21:47.783 15:58:50 -- accel/accel.sh@21 -- # val= 00:21:47.783 15:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:21:47.783 15:58:50 -- accel/accel.sh@20 -- # IFS=: 00:21:47.783 15:58:50 -- accel/accel.sh@20 -- # read -r var val 00:21:47.783 15:58:50 -- accel/accel.sh@21 -- # val= 00:21:47.783 15:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:21:47.783 15:58:50 -- accel/accel.sh@20 -- # IFS=: 00:21:47.783 15:58:50 -- accel/accel.sh@20 -- # read -r var val 00:21:47.783 15:58:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:21:47.783 15:58:50 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:21:47.783 15:58:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:47.783 00:21:47.783 real 0m2.803s 00:21:47.783 user 0m2.447s 00:21:47.783 sys 0m0.149s 00:21:47.783 15:58:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:47.783 ************************************ 00:21:47.783 END TEST accel_dif_generate_copy 00:21:47.783 15:58:50 -- common/autotest_common.sh@10 -- # set +x 00:21:47.783 ************************************ 00:21:47.783 15:58:50 -- accel/accel.sh@107 -- # [[ y == y ]] 00:21:47.783 15:58:50 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:47.783 15:58:50 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:21:47.783 15:58:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:47.783 15:58:50 -- 
common/autotest_common.sh@10 -- # set +x 00:21:47.783 ************************************ 00:21:47.783 START TEST accel_comp 00:21:47.783 ************************************ 00:21:47.783 15:58:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:47.783 15:58:50 -- accel/accel.sh@16 -- # local accel_opc 00:21:47.783 15:58:50 -- accel/accel.sh@17 -- # local accel_module 00:21:47.783 15:58:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:47.783 15:58:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:47.783 15:58:50 -- accel/accel.sh@12 -- # build_accel_config 00:21:47.783 15:58:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:47.783 15:58:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:47.783 15:58:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:47.783 15:58:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:47.783 15:58:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:47.783 15:58:50 -- accel/accel.sh@41 -- # local IFS=, 00:21:47.783 15:58:50 -- accel/accel.sh@42 -- # jq -r . 00:21:47.783 [2024-07-22 15:58:50.447901] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:47.783 [2024-07-22 15:58:50.447994] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56724 ] 00:21:47.783 [2024-07-22 15:58:50.585518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.041 [2024-07-22 15:58:50.666822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.414 15:58:51 -- accel/accel.sh@18 -- # out='Preparing input file... 00:21:49.414 00:21:49.414 SPDK Configuration: 00:21:49.414 Core mask: 0x1 00:21:49.414 00:21:49.414 Accel Perf Configuration: 00:21:49.414 Workload Type: compress 00:21:49.414 Transfer size: 4096 bytes 00:21:49.414 Vector count 1 00:21:49.414 Module: software 00:21:49.414 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:49.414 Queue depth: 32 00:21:49.414 Allocate depth: 32 00:21:49.414 # threads/core: 1 00:21:49.414 Run time: 1 seconds 00:21:49.414 Verify: No 00:21:49.414 00:21:49.414 Running for 1 seconds... 
00:21:49.414 00:21:49.414 Core,Thread Transfers Bandwidth Failed Miscompares 00:21:49.414 ------------------------------------------------------------------------------------ 00:21:49.414 0,0 44576/s 185 MiB/s 0 0 00:21:49.414 ==================================================================================== 00:21:49.414 Total 44576/s 174 MiB/s 0 0' 00:21:49.414 15:58:51 -- accel/accel.sh@20 -- # IFS=: 00:21:49.414 15:58:51 -- accel/accel.sh@20 -- # read -r var val 00:21:49.414 15:58:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:49.414 15:58:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:49.414 15:58:51 -- accel/accel.sh@12 -- # build_accel_config 00:21:49.414 15:58:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:49.414 15:58:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:49.414 15:58:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:49.414 15:58:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:49.414 15:58:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:49.414 15:58:51 -- accel/accel.sh@41 -- # local IFS=, 00:21:49.414 15:58:51 -- accel/accel.sh@42 -- # jq -r . 00:21:49.414 [2024-07-22 15:58:51.872168] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:49.414 [2024-07-22 15:58:51.872259] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56740 ] 00:21:49.414 [2024-07-22 15:58:52.009823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.414 [2024-07-22 15:58:52.099922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.414 15:58:52 -- accel/accel.sh@21 -- # val= 00:21:49.414 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:21:49.414 15:58:52 -- accel/accel.sh@21 -- # val= 00:21:49.414 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:21:49.414 15:58:52 -- accel/accel.sh@21 -- # val= 00:21:49.414 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:21:49.414 15:58:52 -- accel/accel.sh@21 -- # val=0x1 00:21:49.414 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:21:49.414 15:58:52 -- accel/accel.sh@21 -- # val= 00:21:49.414 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:21:49.414 15:58:52 -- accel/accel.sh@21 -- # val= 00:21:49.414 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:21:49.414 15:58:52 -- accel/accel.sh@21 -- # val=compress 00:21:49.414 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:21:49.414 15:58:52 -- accel/accel.sh@24 -- # accel_opc=compress 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # IFS=: 
00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:21:49.414 15:58:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:21:49.414 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:21:49.414 15:58:52 -- accel/accel.sh@21 -- # val= 00:21:49.414 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:21:49.414 15:58:52 -- accel/accel.sh@21 -- # val=software 00:21:49.414 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:21:49.414 15:58:52 -- accel/accel.sh@23 -- # accel_module=software 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:21:49.414 15:58:52 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:49.414 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:21:49.414 15:58:52 -- accel/accel.sh@21 -- # val=32 00:21:49.414 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:21:49.414 15:58:52 -- accel/accel.sh@21 -- # val=32 00:21:49.414 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:21:49.414 15:58:52 -- accel/accel.sh@21 -- # val=1 00:21:49.414 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:21:49.414 15:58:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:21:49.414 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:21:49.414 15:58:52 -- accel/accel.sh@21 -- # val=No 00:21:49.414 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:21:49.414 15:58:52 -- accel/accel.sh@21 -- # val= 00:21:49.414 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:21:49.414 15:58:52 -- accel/accel.sh@21 -- # val= 00:21:49.414 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:21:49.414 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:21:50.789 15:58:53 -- accel/accel.sh@21 -- # val= 00:21:50.789 15:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:21:50.789 15:58:53 -- accel/accel.sh@20 -- # IFS=: 00:21:50.789 15:58:53 -- accel/accel.sh@20 -- # read -r var val 00:21:50.789 15:58:53 -- accel/accel.sh@21 -- # val= 00:21:50.789 15:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:21:50.789 15:58:53 -- accel/accel.sh@20 -- # IFS=: 00:21:50.789 15:58:53 -- accel/accel.sh@20 -- # read -r var val 00:21:50.789 15:58:53 -- accel/accel.sh@21 -- # val= 00:21:50.789 15:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:21:50.789 15:58:53 -- accel/accel.sh@20 -- # IFS=: 00:21:50.789 15:58:53 -- accel/accel.sh@20 -- # read -r var val 00:21:50.789 15:58:53 -- accel/accel.sh@21 -- # val= 
00:21:50.789 15:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:21:50.789 15:58:53 -- accel/accel.sh@20 -- # IFS=: 00:21:50.789 15:58:53 -- accel/accel.sh@20 -- # read -r var val 00:21:50.789 15:58:53 -- accel/accel.sh@21 -- # val= 00:21:50.790 15:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:21:50.790 15:58:53 -- accel/accel.sh@20 -- # IFS=: 00:21:50.790 15:58:53 -- accel/accel.sh@20 -- # read -r var val 00:21:50.790 15:58:53 -- accel/accel.sh@21 -- # val= 00:21:50.790 15:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:21:50.790 15:58:53 -- accel/accel.sh@20 -- # IFS=: 00:21:50.790 15:58:53 -- accel/accel.sh@20 -- # read -r var val 00:21:50.790 15:58:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:21:50.790 15:58:53 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:21:50.790 15:58:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:50.790 00:21:50.790 real 0m2.859s 00:21:50.790 user 0m2.484s 00:21:50.790 sys 0m0.167s 00:21:50.790 15:58:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:50.790 ************************************ 00:21:50.790 END TEST accel_comp 00:21:50.790 ************************************ 00:21:50.790 15:58:53 -- common/autotest_common.sh@10 -- # set +x 00:21:50.790 15:58:53 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:21:50.790 15:58:53 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:21:50.790 15:58:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:50.790 15:58:53 -- common/autotest_common.sh@10 -- # set +x 00:21:50.790 ************************************ 00:21:50.790 START TEST accel_decomp 00:21:50.790 ************************************ 00:21:50.790 15:58:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:21:50.790 15:58:53 -- accel/accel.sh@16 -- # local accel_opc 00:21:50.790 15:58:53 -- accel/accel.sh@17 -- # local accel_module 00:21:50.790 15:58:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:21:50.790 15:58:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:21:50.790 15:58:53 -- accel/accel.sh@12 -- # build_accel_config 00:21:50.790 15:58:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:50.790 15:58:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:50.790 15:58:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:50.790 15:58:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:50.790 15:58:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:50.790 15:58:53 -- accel/accel.sh@41 -- # local IFS=, 00:21:50.790 15:58:53 -- accel/accel.sh@42 -- # jq -r . 00:21:50.790 [2024-07-22 15:58:53.352265] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:50.790 [2024-07-22 15:58:53.352363] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56779 ] 00:21:50.790 [2024-07-22 15:58:53.486093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.790 [2024-07-22 15:58:53.546542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.165 15:58:54 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:21:52.165 00:21:52.165 SPDK Configuration: 00:21:52.165 Core mask: 0x1 00:21:52.165 00:21:52.165 Accel Perf Configuration: 00:21:52.165 Workload Type: decompress 00:21:52.165 Transfer size: 4096 bytes 00:21:52.165 Vector count 1 00:21:52.165 Module: software 00:21:52.165 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:52.165 Queue depth: 32 00:21:52.165 Allocate depth: 32 00:21:52.165 # threads/core: 1 00:21:52.165 Run time: 1 seconds 00:21:52.165 Verify: Yes 00:21:52.165 00:21:52.165 Running for 1 seconds... 00:21:52.165 00:21:52.165 Core,Thread Transfers Bandwidth Failed Miscompares 00:21:52.165 ------------------------------------------------------------------------------------ 00:21:52.165 0,0 62944/s 115 MiB/s 0 0 00:21:52.165 ==================================================================================== 00:21:52.165 Total 62944/s 245 MiB/s 0 0' 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:21:52.165 15:58:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:21:52.165 15:58:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:21:52.165 15:58:54 -- accel/accel.sh@12 -- # build_accel_config 00:21:52.165 15:58:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:52.165 15:58:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:52.165 15:58:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:52.165 15:58:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:52.165 15:58:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:52.165 15:58:54 -- accel/accel.sh@41 -- # local IFS=, 00:21:52.165 15:58:54 -- accel/accel.sh@42 -- # jq -r . 00:21:52.165 [2024-07-22 15:58:54.747527] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:52.165 [2024-07-22 15:58:54.747621] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56795 ] 00:21:52.165 [2024-07-22 15:58:54.883773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.165 [2024-07-22 15:58:54.951692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.165 15:58:54 -- accel/accel.sh@21 -- # val= 00:21:52.165 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:21:52.165 15:58:54 -- accel/accel.sh@21 -- # val= 00:21:52.165 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:21:52.165 15:58:54 -- accel/accel.sh@21 -- # val= 00:21:52.165 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:21:52.165 15:58:54 -- accel/accel.sh@21 -- # val=0x1 00:21:52.165 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:21:52.165 15:58:54 -- accel/accel.sh@21 -- # val= 00:21:52.165 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:21:52.165 15:58:54 -- accel/accel.sh@21 -- # val= 00:21:52.165 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:21:52.165 15:58:54 -- accel/accel.sh@21 -- # val=decompress 00:21:52.165 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:21:52.165 15:58:54 -- accel/accel.sh@24 -- # accel_opc=decompress 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:21:52.165 15:58:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:21:52.165 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:21:52.165 15:58:54 -- accel/accel.sh@21 -- # val= 00:21:52.165 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:21:52.165 15:58:54 -- accel/accel.sh@21 -- # val=software 00:21:52.165 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:21:52.165 15:58:54 -- accel/accel.sh@23 -- # accel_module=software 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:21:52.165 15:58:54 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:52.165 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:21:52.165 15:58:54 -- accel/accel.sh@21 -- # val=32 00:21:52.165 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:21:52.165 15:58:54 -- 
accel/accel.sh@21 -- # val=32 00:21:52.165 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:21:52.165 15:58:54 -- accel/accel.sh@21 -- # val=1 00:21:52.165 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:21:52.165 15:58:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:21:52.165 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:21:52.165 15:58:54 -- accel/accel.sh@21 -- # val=Yes 00:21:52.165 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:21:52.165 15:58:54 -- accel/accel.sh@21 -- # val= 00:21:52.165 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:21:52.165 15:58:54 -- accel/accel.sh@21 -- # val= 00:21:52.165 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:21:52.165 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:21:53.570 15:58:56 -- accel/accel.sh@21 -- # val= 00:21:53.570 15:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:21:53.570 15:58:56 -- accel/accel.sh@20 -- # IFS=: 00:21:53.570 15:58:56 -- accel/accel.sh@20 -- # read -r var val 00:21:53.570 15:58:56 -- accel/accel.sh@21 -- # val= 00:21:53.570 15:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:21:53.570 15:58:56 -- accel/accel.sh@20 -- # IFS=: 00:21:53.570 15:58:56 -- accel/accel.sh@20 -- # read -r var val 00:21:53.570 15:58:56 -- accel/accel.sh@21 -- # val= 00:21:53.570 15:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:21:53.570 15:58:56 -- accel/accel.sh@20 -- # IFS=: 00:21:53.570 15:58:56 -- accel/accel.sh@20 -- # read -r var val 00:21:53.570 15:58:56 -- accel/accel.sh@21 -- # val= 00:21:53.570 15:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:21:53.570 15:58:56 -- accel/accel.sh@20 -- # IFS=: 00:21:53.570 15:58:56 -- accel/accel.sh@20 -- # read -r var val 00:21:53.570 15:58:56 -- accel/accel.sh@21 -- # val= 00:21:53.570 15:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:21:53.570 15:58:56 -- accel/accel.sh@20 -- # IFS=: 00:21:53.570 15:58:56 -- accel/accel.sh@20 -- # read -r var val 00:21:53.570 15:58:56 -- accel/accel.sh@21 -- # val= 00:21:53.570 15:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:21:53.570 15:58:56 -- accel/accel.sh@20 -- # IFS=: 00:21:53.570 15:58:56 -- accel/accel.sh@20 -- # read -r var val 00:21:53.571 15:58:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:21:53.571 15:58:56 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:21:53.571 15:58:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:53.571 00:21:53.571 real 0m2.803s 00:21:53.571 user 0m2.445s 00:21:53.571 sys 0m0.150s 00:21:53.571 15:58:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:53.571 15:58:56 -- common/autotest_common.sh@10 -- # set +x 00:21:53.571 ************************************ 00:21:53.571 END TEST accel_decomp 00:21:53.571 ************************************ 00:21:53.571 15:58:56 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
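The accel_test wrapper invoked by run_test above resolves to the accel_perf command line already visible in the traces; a rough stand-alone equivalent is sketched below (an assumption: -c /dev/fd/62 only carries the JSON produced by build_accel_config, which is empty in these runs, so it is left out here):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -o 0 -y \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib   # same flags as traced above
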
00:21:53.571 15:58:56 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:21:53.571 15:58:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:53.571 15:58:56 -- common/autotest_common.sh@10 -- # set +x 00:21:53.571 ************************************ 00:21:53.571 START TEST accel_decmop_full 00:21:53.571 ************************************ 00:21:53.571 15:58:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:21:53.571 15:58:56 -- accel/accel.sh@16 -- # local accel_opc 00:21:53.571 15:58:56 -- accel/accel.sh@17 -- # local accel_module 00:21:53.571 15:58:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:21:53.571 15:58:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:21:53.571 15:58:56 -- accel/accel.sh@12 -- # build_accel_config 00:21:53.571 15:58:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:53.571 15:58:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:53.571 15:58:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:53.571 15:58:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:53.571 15:58:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:53.571 15:58:56 -- accel/accel.sh@41 -- # local IFS=, 00:21:53.571 15:58:56 -- accel/accel.sh@42 -- # jq -r . 00:21:53.571 [2024-07-22 15:58:56.195180] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:53.571 [2024-07-22 15:58:56.195269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56824 ] 00:21:53.571 [2024-07-22 15:58:56.332358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.571 [2024-07-22 15:58:56.400791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.947 15:58:57 -- accel/accel.sh@18 -- # out='Preparing input file... 00:21:54.947 00:21:54.947 SPDK Configuration: 00:21:54.947 Core mask: 0x1 00:21:54.947 00:21:54.947 Accel Perf Configuration: 00:21:54.947 Workload Type: decompress 00:21:54.947 Transfer size: 111250 bytes 00:21:54.947 Vector count 1 00:21:54.947 Module: software 00:21:54.947 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:54.947 Queue depth: 32 00:21:54.947 Allocate depth: 32 00:21:54.947 # threads/core: 1 00:21:54.947 Run time: 1 seconds 00:21:54.947 Verify: Yes 00:21:54.947 00:21:54.947 Running for 1 seconds... 
00:21:54.947 00:21:54.947 Core,Thread Transfers Bandwidth Failed Miscompares 00:21:54.948 ------------------------------------------------------------------------------------ 00:21:54.948 0,0 4096/s 169 MiB/s 0 0 00:21:54.948 ==================================================================================== 00:21:54.948 Total 4096/s 434 MiB/s 0 0' 00:21:54.948 15:58:57 -- accel/accel.sh@20 -- # IFS=: 00:21:54.948 15:58:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:21:54.948 15:58:57 -- accel/accel.sh@20 -- # read -r var val 00:21:54.948 15:58:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:21:54.948 15:58:57 -- accel/accel.sh@12 -- # build_accel_config 00:21:54.948 15:58:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:54.948 15:58:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:54.948 15:58:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:54.948 15:58:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:54.948 15:58:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:54.948 15:58:57 -- accel/accel.sh@41 -- # local IFS=, 00:21:54.948 15:58:57 -- accel/accel.sh@42 -- # jq -r . 00:21:54.948 [2024-07-22 15:58:57.602709] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:54.948 [2024-07-22 15:58:57.602791] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56849 ] 00:21:54.948 [2024-07-22 15:58:57.737253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.948 [2024-07-22 15:58:57.804299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.206 15:58:57 -- accel/accel.sh@21 -- # val= 00:21:55.206 15:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # IFS=: 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # read -r var val 00:21:55.206 15:58:57 -- accel/accel.sh@21 -- # val= 00:21:55.206 15:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # IFS=: 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # read -r var val 00:21:55.206 15:58:57 -- accel/accel.sh@21 -- # val= 00:21:55.206 15:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # IFS=: 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # read -r var val 00:21:55.206 15:58:57 -- accel/accel.sh@21 -- # val=0x1 00:21:55.206 15:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # IFS=: 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # read -r var val 00:21:55.206 15:58:57 -- accel/accel.sh@21 -- # val= 00:21:55.206 15:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # IFS=: 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # read -r var val 00:21:55.206 15:58:57 -- accel/accel.sh@21 -- # val= 00:21:55.206 15:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # IFS=: 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # read -r var val 00:21:55.206 15:58:57 -- accel/accel.sh@21 -- # val=decompress 00:21:55.206 15:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:21:55.206 15:58:57 -- accel/accel.sh@24 -- # accel_opc=decompress 00:21:55.206 15:58:57 -- accel/accel.sh@20 
-- # IFS=: 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # read -r var val 00:21:55.206 15:58:57 -- accel/accel.sh@21 -- # val='111250 bytes' 00:21:55.206 15:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # IFS=: 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # read -r var val 00:21:55.206 15:58:57 -- accel/accel.sh@21 -- # val= 00:21:55.206 15:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # IFS=: 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # read -r var val 00:21:55.206 15:58:57 -- accel/accel.sh@21 -- # val=software 00:21:55.206 15:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:21:55.206 15:58:57 -- accel/accel.sh@23 -- # accel_module=software 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # IFS=: 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # read -r var val 00:21:55.206 15:58:57 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:55.206 15:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # IFS=: 00:21:55.206 15:58:57 -- accel/accel.sh@20 -- # read -r var val 00:21:55.206 15:58:57 -- accel/accel.sh@21 -- # val=32 00:21:55.207 15:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:21:55.207 15:58:57 -- accel/accel.sh@20 -- # IFS=: 00:21:55.207 15:58:57 -- accel/accel.sh@20 -- # read -r var val 00:21:55.207 15:58:57 -- accel/accel.sh@21 -- # val=32 00:21:55.207 15:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:21:55.207 15:58:57 -- accel/accel.sh@20 -- # IFS=: 00:21:55.207 15:58:57 -- accel/accel.sh@20 -- # read -r var val 00:21:55.207 15:58:57 -- accel/accel.sh@21 -- # val=1 00:21:55.207 15:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:21:55.207 15:58:57 -- accel/accel.sh@20 -- # IFS=: 00:21:55.207 15:58:57 -- accel/accel.sh@20 -- # read -r var val 00:21:55.207 15:58:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:21:55.207 15:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:21:55.207 15:58:57 -- accel/accel.sh@20 -- # IFS=: 00:21:55.207 15:58:57 -- accel/accel.sh@20 -- # read -r var val 00:21:55.207 15:58:57 -- accel/accel.sh@21 -- # val=Yes 00:21:55.207 15:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:21:55.207 15:58:57 -- accel/accel.sh@20 -- # IFS=: 00:21:55.207 15:58:57 -- accel/accel.sh@20 -- # read -r var val 00:21:55.207 15:58:57 -- accel/accel.sh@21 -- # val= 00:21:55.207 15:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:21:55.207 15:58:57 -- accel/accel.sh@20 -- # IFS=: 00:21:55.207 15:58:57 -- accel/accel.sh@20 -- # read -r var val 00:21:55.207 15:58:57 -- accel/accel.sh@21 -- # val= 00:21:55.207 15:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:21:55.207 15:58:57 -- accel/accel.sh@20 -- # IFS=: 00:21:55.207 15:58:57 -- accel/accel.sh@20 -- # read -r var val 00:21:56.143 15:58:58 -- accel/accel.sh@21 -- # val= 00:21:56.143 15:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:21:56.143 15:58:58 -- accel/accel.sh@20 -- # IFS=: 00:21:56.143 15:58:58 -- accel/accel.sh@20 -- # read -r var val 00:21:56.143 15:58:58 -- accel/accel.sh@21 -- # val= 00:21:56.143 15:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:21:56.143 15:58:58 -- accel/accel.sh@20 -- # IFS=: 00:21:56.143 15:58:58 -- accel/accel.sh@20 -- # read -r var val 00:21:56.143 15:58:58 -- accel/accel.sh@21 -- # val= 00:21:56.143 15:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:21:56.143 15:58:58 -- accel/accel.sh@20 -- # IFS=: 00:21:56.143 15:58:58 -- accel/accel.sh@20 -- # read -r var val 00:21:56.143 15:58:58 -- accel/accel.sh@21 -- # 
val= 00:21:56.143 15:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:21:56.143 15:58:58 -- accel/accel.sh@20 -- # IFS=: 00:21:56.143 15:58:58 -- accel/accel.sh@20 -- # read -r var val 00:21:56.143 15:58:58 -- accel/accel.sh@21 -- # val= 00:21:56.143 15:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:21:56.143 15:58:58 -- accel/accel.sh@20 -- # IFS=: 00:21:56.143 15:58:58 -- accel/accel.sh@20 -- # read -r var val 00:21:56.143 15:58:58 -- accel/accel.sh@21 -- # val= 00:21:56.143 15:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:21:56.143 15:58:58 -- accel/accel.sh@20 -- # IFS=: 00:21:56.143 15:58:58 -- accel/accel.sh@20 -- # read -r var val 00:21:56.143 15:58:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:21:56.143 15:58:58 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:21:56.143 15:58:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:56.143 00:21:56.143 real 0m2.812s 00:21:56.143 user 0m2.454s 00:21:56.143 sys 0m0.147s 00:21:56.143 15:58:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:56.143 15:58:58 -- common/autotest_common.sh@10 -- # set +x 00:21:56.143 ************************************ 00:21:56.143 END TEST accel_decmop_full 00:21:56.143 ************************************ 00:21:56.412 15:58:59 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:21:56.412 15:58:59 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:21:56.412 15:58:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:56.412 15:58:59 -- common/autotest_common.sh@10 -- # set +x 00:21:56.412 ************************************ 00:21:56.412 START TEST accel_decomp_mcore 00:21:56.412 ************************************ 00:21:56.412 15:58:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:21:56.412 15:58:59 -- accel/accel.sh@16 -- # local accel_opc 00:21:56.412 15:58:59 -- accel/accel.sh@17 -- # local accel_module 00:21:56.412 15:58:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:21:56.412 15:58:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:21:56.412 15:58:59 -- accel/accel.sh@12 -- # build_accel_config 00:21:56.412 15:58:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:56.412 15:58:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:56.412 15:58:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:56.412 15:58:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:56.412 15:58:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:56.412 15:58:59 -- accel/accel.sh@41 -- # local IFS=, 00:21:56.412 15:58:59 -- accel/accel.sh@42 -- # jq -r . 00:21:56.412 [2024-07-22 15:58:59.042000] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
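The -m 0xf core mask passed to accel_perf above selects the four lowest cores; a small bash sketch for expanding such a mask (illustrative only, not part of the test scripts):

    mask=0xf
    for core in {0..31}; do
        (( (mask >> core) & 1 )) && echo "core $core enabled"
    done
    # prints cores 0-3, matching the four reactor notices that follow
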
00:21:56.412 [2024-07-22 15:58:59.042090] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56879 ] 00:21:56.412 [2024-07-22 15:58:59.178404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:56.412 [2024-07-22 15:58:59.240118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.412 [2024-07-22 15:58:59.240180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.412 [2024-07-22 15:58:59.240266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:56.412 [2024-07-22 15:58:59.240279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.825 15:59:00 -- accel/accel.sh@18 -- # out='Preparing input file... 00:21:57.825 00:21:57.825 SPDK Configuration: 00:21:57.825 Core mask: 0xf 00:21:57.825 00:21:57.825 Accel Perf Configuration: 00:21:57.825 Workload Type: decompress 00:21:57.825 Transfer size: 4096 bytes 00:21:57.825 Vector count 1 00:21:57.825 Module: software 00:21:57.825 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:57.825 Queue depth: 32 00:21:57.825 Allocate depth: 32 00:21:57.825 # threads/core: 1 00:21:57.825 Run time: 1 seconds 00:21:57.825 Verify: Yes 00:21:57.825 00:21:57.825 Running for 1 seconds... 00:21:57.825 00:21:57.825 Core,Thread Transfers Bandwidth Failed Miscompares 00:21:57.825 ------------------------------------------------------------------------------------ 00:21:57.825 0,0 47296/s 87 MiB/s 0 0 00:21:57.825 3,0 57376/s 105 MiB/s 0 0 00:21:57.825 2,0 45728/s 84 MiB/s 0 0 00:21:57.825 1,0 53376/s 98 MiB/s 0 0 00:21:57.825 ==================================================================================== 00:21:57.825 Total 203776/s 796 MiB/s 0 0' 00:21:57.825 15:59:00 -- accel/accel.sh@20 -- # IFS=: 00:21:57.825 15:59:00 -- accel/accel.sh@20 -- # read -r var val 00:21:57.825 15:59:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:21:57.825 15:59:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:21:57.825 15:59:00 -- accel/accel.sh@12 -- # build_accel_config 00:21:57.825 15:59:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:57.825 15:59:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:57.825 15:59:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:57.825 15:59:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:57.825 15:59:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:57.825 15:59:00 -- accel/accel.sh@41 -- # local IFS=, 00:21:57.825 15:59:00 -- accel/accel.sh@42 -- # jq -r . 00:21:57.825 [2024-07-22 15:59:00.467987] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
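In the multicore table above, the Total transfer rate is the sum of the four per-core rates, and the aggregate bandwidth follows from that sum and the 4096-byte transfer size; a quick bash check using the numbers from the table:

    per_core=(47296 57376 45728 53376)                 # the four Core,Thread rows above
    total=0; for r in "${per_core[@]}"; do (( total += r )); done
    echo "$total transfers/s"                          # prints: 203776 transfers/s
    echo "$(( total * 4096 / 1024 / 1024 )) MiB/s"     # prints: 796 MiB/s
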
00:21:57.825 [2024-07-22 15:59:00.468069] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56902 ] 00:21:57.825 [2024-07-22 15:59:00.600838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:57.825 [2024-07-22 15:59:00.662703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.825 [2024-07-22 15:59:00.662788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.825 [2024-07-22 15:59:00.662847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:57.825 [2024-07-22 15:59:00.662855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.084 15:59:00 -- accel/accel.sh@21 -- # val= 00:21:58.084 15:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # IFS=: 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # read -r var val 00:21:58.084 15:59:00 -- accel/accel.sh@21 -- # val= 00:21:58.084 15:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # IFS=: 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # read -r var val 00:21:58.084 15:59:00 -- accel/accel.sh@21 -- # val= 00:21:58.084 15:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # IFS=: 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # read -r var val 00:21:58.084 15:59:00 -- accel/accel.sh@21 -- # val=0xf 00:21:58.084 15:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # IFS=: 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # read -r var val 00:21:58.084 15:59:00 -- accel/accel.sh@21 -- # val= 00:21:58.084 15:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # IFS=: 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # read -r var val 00:21:58.084 15:59:00 -- accel/accel.sh@21 -- # val= 00:21:58.084 15:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # IFS=: 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # read -r var val 00:21:58.084 15:59:00 -- accel/accel.sh@21 -- # val=decompress 00:21:58.084 15:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:21:58.084 15:59:00 -- accel/accel.sh@24 -- # accel_opc=decompress 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # IFS=: 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # read -r var val 00:21:58.084 15:59:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:21:58.084 15:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # IFS=: 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # read -r var val 00:21:58.084 15:59:00 -- accel/accel.sh@21 -- # val= 00:21:58.084 15:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # IFS=: 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # read -r var val 00:21:58.084 15:59:00 -- accel/accel.sh@21 -- # val=software 00:21:58.084 15:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:21:58.084 15:59:00 -- accel/accel.sh@23 -- # accel_module=software 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # IFS=: 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # read -r var val 00:21:58.084 15:59:00 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:21:58.084 15:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # IFS=: 
00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # read -r var val 00:21:58.084 15:59:00 -- accel/accel.sh@21 -- # val=32 00:21:58.084 15:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # IFS=: 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # read -r var val 00:21:58.084 15:59:00 -- accel/accel.sh@21 -- # val=32 00:21:58.084 15:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # IFS=: 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # read -r var val 00:21:58.084 15:59:00 -- accel/accel.sh@21 -- # val=1 00:21:58.084 15:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # IFS=: 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # read -r var val 00:21:58.084 15:59:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:21:58.084 15:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # IFS=: 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # read -r var val 00:21:58.084 15:59:00 -- accel/accel.sh@21 -- # val=Yes 00:21:58.084 15:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # IFS=: 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # read -r var val 00:21:58.084 15:59:00 -- accel/accel.sh@21 -- # val= 00:21:58.084 15:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # IFS=: 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # read -r var val 00:21:58.084 15:59:00 -- accel/accel.sh@21 -- # val= 00:21:58.084 15:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # IFS=: 00:21:58.084 15:59:00 -- accel/accel.sh@20 -- # read -r var val 00:21:59.018 15:59:01 -- accel/accel.sh@21 -- # val= 00:21:59.018 15:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:21:59.018 15:59:01 -- accel/accel.sh@20 -- # IFS=: 00:21:59.018 15:59:01 -- accel/accel.sh@20 -- # read -r var val 00:21:59.018 15:59:01 -- accel/accel.sh@21 -- # val= 00:21:59.018 15:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:21:59.018 15:59:01 -- accel/accel.sh@20 -- # IFS=: 00:21:59.018 15:59:01 -- accel/accel.sh@20 -- # read -r var val 00:21:59.018 15:59:01 -- accel/accel.sh@21 -- # val= 00:21:59.018 15:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:21:59.018 15:59:01 -- accel/accel.sh@20 -- # IFS=: 00:21:59.018 15:59:01 -- accel/accel.sh@20 -- # read -r var val 00:21:59.018 15:59:01 -- accel/accel.sh@21 -- # val= 00:21:59.018 15:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:21:59.018 15:59:01 -- accel/accel.sh@20 -- # IFS=: 00:21:59.018 15:59:01 -- accel/accel.sh@20 -- # read -r var val 00:21:59.018 15:59:01 -- accel/accel.sh@21 -- # val= 00:21:59.018 15:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:21:59.018 15:59:01 -- accel/accel.sh@20 -- # IFS=: 00:21:59.018 15:59:01 -- accel/accel.sh@20 -- # read -r var val 00:21:59.018 15:59:01 -- accel/accel.sh@21 -- # val= 00:21:59.018 15:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:21:59.018 15:59:01 -- accel/accel.sh@20 -- # IFS=: 00:21:59.018 15:59:01 -- accel/accel.sh@20 -- # read -r var val 00:21:59.018 15:59:01 -- accel/accel.sh@21 -- # val= 00:21:59.018 15:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:21:59.018 15:59:01 -- accel/accel.sh@20 -- # IFS=: 00:21:59.018 15:59:01 -- accel/accel.sh@20 -- # read -r var val 00:21:59.018 15:59:01 -- accel/accel.sh@21 -- # val= 00:21:59.018 15:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:21:59.018 15:59:01 -- accel/accel.sh@20 -- # IFS=: 00:21:59.018 15:59:01 -- 
accel/accel.sh@20 -- # read -r var val 00:21:59.018 15:59:01 -- accel/accel.sh@21 -- # val= 00:21:59.018 15:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:21:59.018 15:59:01 -- accel/accel.sh@20 -- # IFS=: 00:21:59.018 15:59:01 -- accel/accel.sh@20 -- # read -r var val 00:21:59.018 15:59:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:21:59.018 15:59:01 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:21:59.018 15:59:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:59.018 00:21:59.018 real 0m2.826s 00:21:59.018 user 0m8.984s 00:21:59.018 sys 0m0.164s 00:21:59.018 15:59:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:59.018 15:59:01 -- common/autotest_common.sh@10 -- # set +x 00:21:59.018 ************************************ 00:21:59.018 END TEST accel_decomp_mcore 00:21:59.018 ************************************ 00:21:59.276 15:59:01 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:21:59.276 15:59:01 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:21:59.276 15:59:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:59.276 15:59:01 -- common/autotest_common.sh@10 -- # set +x 00:21:59.276 ************************************ 00:21:59.276 START TEST accel_decomp_full_mcore 00:21:59.276 ************************************ 00:21:59.276 15:59:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:21:59.276 15:59:01 -- accel/accel.sh@16 -- # local accel_opc 00:21:59.276 15:59:01 -- accel/accel.sh@17 -- # local accel_module 00:21:59.276 15:59:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:21:59.276 15:59:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:21:59.276 15:59:01 -- accel/accel.sh@12 -- # build_accel_config 00:21:59.276 15:59:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:21:59.276 15:59:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:59.276 15:59:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:59.276 15:59:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:21:59.276 15:59:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:21:59.276 15:59:01 -- accel/accel.sh@41 -- # local IFS=, 00:21:59.276 15:59:01 -- accel/accel.sh@42 -- # jq -r . 00:21:59.276 [2024-07-22 15:59:01.920416] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:59.276 [2024-07-22 15:59:01.920547] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56939 ] 00:21:59.276 [2024-07-22 15:59:02.058648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:59.276 [2024-07-22 15:59:02.119273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.276 [2024-07-22 15:59:02.119418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.276 [2024-07-22 15:59:02.119664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.276 [2024-07-22 15:59:02.119539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:00.666 15:59:03 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:22:00.666 00:22:00.666 SPDK Configuration: 00:22:00.666 Core mask: 0xf 00:22:00.666 00:22:00.666 Accel Perf Configuration: 00:22:00.666 Workload Type: decompress 00:22:00.666 Transfer size: 111250 bytes 00:22:00.666 Vector count 1 00:22:00.666 Module: software 00:22:00.666 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:22:00.666 Queue depth: 32 00:22:00.666 Allocate depth: 32 00:22:00.666 # threads/core: 1 00:22:00.666 Run time: 1 seconds 00:22:00.666 Verify: Yes 00:22:00.666 00:22:00.666 Running for 1 seconds... 00:22:00.666 00:22:00.666 Core,Thread Transfers Bandwidth Failed Miscompares 00:22:00.666 ------------------------------------------------------------------------------------ 00:22:00.666 0,0 4256/s 175 MiB/s 0 0 00:22:00.666 3,0 4096/s 169 MiB/s 0 0 00:22:00.666 2,0 3776/s 155 MiB/s 0 0 00:22:00.666 1,0 3968/s 163 MiB/s 0 0 00:22:00.666 ==================================================================================== 00:22:00.666 Total 16096/s 1707 MiB/s 0 0' 00:22:00.666 15:59:03 -- accel/accel.sh@20 -- # IFS=: 00:22:00.666 15:59:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:22:00.666 15:59:03 -- accel/accel.sh@20 -- # read -r var val 00:22:00.666 15:59:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:22:00.666 15:59:03 -- accel/accel.sh@12 -- # build_accel_config 00:22:00.666 15:59:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:22:00.666 15:59:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:00.666 15:59:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:00.666 15:59:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:22:00.666 15:59:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:22:00.666 15:59:03 -- accel/accel.sh@41 -- # local IFS=, 00:22:00.666 15:59:03 -- accel/accel.sh@42 -- # jq -r . 00:22:00.666 [2024-07-22 15:59:03.333254] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
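The full-buffer multicore variant moves 111250-byte transfers, which is what lifts the aggregate well above the 4096-byte run; the same cross-check applies (sketch, values taken from the table above):

    echo "$(( 16096 * 111250 / 1024 / 1024 )) MiB/s"   # prints: 1707 MiB/s
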
00:22:00.666 [2024-07-22 15:59:03.333333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56962 ] 00:22:00.666 [2024-07-22 15:59:03.467295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:00.924 [2024-07-22 15:59:03.537282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.924 [2024-07-22 15:59:03.537401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.924 [2024-07-22 15:59:03.537549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.924 [2024-07-22 15:59:03.537550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:00.924 15:59:03 -- accel/accel.sh@21 -- # val= 00:22:00.924 15:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:22:00.924 15:59:03 -- accel/accel.sh@20 -- # IFS=: 00:22:00.924 15:59:03 -- accel/accel.sh@20 -- # read -r var val 00:22:00.924 15:59:03 -- accel/accel.sh@21 -- # val= 00:22:00.924 15:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:22:00.924 15:59:03 -- accel/accel.sh@20 -- # IFS=: 00:22:00.924 15:59:03 -- accel/accel.sh@20 -- # read -r var val 00:22:00.924 15:59:03 -- accel/accel.sh@21 -- # val= 00:22:00.924 15:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:22:00.924 15:59:03 -- accel/accel.sh@20 -- # IFS=: 00:22:00.924 15:59:03 -- accel/accel.sh@20 -- # read -r var val 00:22:00.924 15:59:03 -- accel/accel.sh@21 -- # val=0xf 00:22:00.924 15:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:22:00.924 15:59:03 -- accel/accel.sh@20 -- # IFS=: 00:22:00.924 15:59:03 -- accel/accel.sh@20 -- # read -r var val 00:22:00.924 15:59:03 -- accel/accel.sh@21 -- # val= 00:22:00.924 15:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:22:00.924 15:59:03 -- accel/accel.sh@20 -- # IFS=: 00:22:00.924 15:59:03 -- accel/accel.sh@20 -- # read -r var val 00:22:00.924 15:59:03 -- accel/accel.sh@21 -- # val= 00:22:00.924 15:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:22:00.924 15:59:03 -- accel/accel.sh@20 -- # IFS=: 00:22:00.924 15:59:03 -- accel/accel.sh@20 -- # read -r var val 00:22:00.924 15:59:03 -- accel/accel.sh@21 -- # val=decompress 00:22:00.924 15:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:22:00.924 15:59:03 -- accel/accel.sh@24 -- # accel_opc=decompress 00:22:00.924 15:59:03 -- accel/accel.sh@20 -- # IFS=: 00:22:00.924 15:59:03 -- accel/accel.sh@20 -- # read -r var val 00:22:00.924 15:59:03 -- accel/accel.sh@21 -- # val='111250 bytes' 00:22:00.924 15:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # IFS=: 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # read -r var val 00:22:00.925 15:59:03 -- accel/accel.sh@21 -- # val= 00:22:00.925 15:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # IFS=: 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # read -r var val 00:22:00.925 15:59:03 -- accel/accel.sh@21 -- # val=software 00:22:00.925 15:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:22:00.925 15:59:03 -- accel/accel.sh@23 -- # accel_module=software 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # IFS=: 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # read -r var val 00:22:00.925 15:59:03 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:22:00.925 15:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # IFS=: 
00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # read -r var val 00:22:00.925 15:59:03 -- accel/accel.sh@21 -- # val=32 00:22:00.925 15:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # IFS=: 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # read -r var val 00:22:00.925 15:59:03 -- accel/accel.sh@21 -- # val=32 00:22:00.925 15:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # IFS=: 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # read -r var val 00:22:00.925 15:59:03 -- accel/accel.sh@21 -- # val=1 00:22:00.925 15:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # IFS=: 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # read -r var val 00:22:00.925 15:59:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:22:00.925 15:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # IFS=: 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # read -r var val 00:22:00.925 15:59:03 -- accel/accel.sh@21 -- # val=Yes 00:22:00.925 15:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # IFS=: 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # read -r var val 00:22:00.925 15:59:03 -- accel/accel.sh@21 -- # val= 00:22:00.925 15:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # IFS=: 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # read -r var val 00:22:00.925 15:59:03 -- accel/accel.sh@21 -- # val= 00:22:00.925 15:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # IFS=: 00:22:00.925 15:59:03 -- accel/accel.sh@20 -- # read -r var val 00:22:02.301 15:59:04 -- accel/accel.sh@21 -- # val= 00:22:02.301 15:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:22:02.301 15:59:04 -- accel/accel.sh@20 -- # IFS=: 00:22:02.301 15:59:04 -- accel/accel.sh@20 -- # read -r var val 00:22:02.301 15:59:04 -- accel/accel.sh@21 -- # val= 00:22:02.301 15:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:22:02.301 15:59:04 -- accel/accel.sh@20 -- # IFS=: 00:22:02.301 15:59:04 -- accel/accel.sh@20 -- # read -r var val 00:22:02.301 15:59:04 -- accel/accel.sh@21 -- # val= 00:22:02.301 15:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:22:02.301 15:59:04 -- accel/accel.sh@20 -- # IFS=: 00:22:02.301 15:59:04 -- accel/accel.sh@20 -- # read -r var val 00:22:02.301 15:59:04 -- accel/accel.sh@21 -- # val= 00:22:02.301 15:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:22:02.301 15:59:04 -- accel/accel.sh@20 -- # IFS=: 00:22:02.301 15:59:04 -- accel/accel.sh@20 -- # read -r var val 00:22:02.301 15:59:04 -- accel/accel.sh@21 -- # val= 00:22:02.301 15:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:22:02.301 15:59:04 -- accel/accel.sh@20 -- # IFS=: 00:22:02.301 15:59:04 -- accel/accel.sh@20 -- # read -r var val 00:22:02.301 15:59:04 -- accel/accel.sh@21 -- # val= 00:22:02.301 15:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:22:02.301 15:59:04 -- accel/accel.sh@20 -- # IFS=: 00:22:02.301 15:59:04 -- accel/accel.sh@20 -- # read -r var val 00:22:02.301 15:59:04 -- accel/accel.sh@21 -- # val= 00:22:02.301 15:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:22:02.301 15:59:04 -- accel/accel.sh@20 -- # IFS=: 00:22:02.301 15:59:04 -- accel/accel.sh@20 -- # read -r var val 00:22:02.301 15:59:04 -- accel/accel.sh@21 -- # val= 00:22:02.301 15:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:22:02.301 15:59:04 -- accel/accel.sh@20 -- # IFS=: 00:22:02.301 15:59:04 -- 
accel/accel.sh@20 -- # read -r var val 00:22:02.301 15:59:04 -- accel/accel.sh@21 -- # val= 00:22:02.301 15:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:22:02.301 15:59:04 -- accel/accel.sh@20 -- # IFS=: 00:22:02.301 15:59:04 -- accel/accel.sh@20 -- # read -r var val 00:22:02.301 15:59:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:22:02.301 ************************************ 00:22:02.301 END TEST accel_decomp_full_mcore 00:22:02.301 ************************************ 00:22:02.301 15:59:04 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:22:02.301 15:59:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:02.301 00:22:02.301 real 0m2.846s 00:22:02.301 user 0m9.011s 00:22:02.301 sys 0m0.163s 00:22:02.301 15:59:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:02.301 15:59:04 -- common/autotest_common.sh@10 -- # set +x 00:22:02.301 15:59:04 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:22:02.301 15:59:04 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:22:02.301 15:59:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:02.301 15:59:04 -- common/autotest_common.sh@10 -- # set +x 00:22:02.301 ************************************ 00:22:02.301 START TEST accel_decomp_mthread 00:22:02.301 ************************************ 00:22:02.301 15:59:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:22:02.301 15:59:04 -- accel/accel.sh@16 -- # local accel_opc 00:22:02.301 15:59:04 -- accel/accel.sh@17 -- # local accel_module 00:22:02.301 15:59:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:22:02.301 15:59:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:22:02.301 15:59:04 -- accel/accel.sh@12 -- # build_accel_config 00:22:02.301 15:59:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:22:02.301 15:59:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:02.301 15:59:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:02.301 15:59:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:22:02.301 15:59:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:22:02.301 15:59:04 -- accel/accel.sh@41 -- # local IFS=, 00:22:02.301 15:59:04 -- accel/accel.sh@42 -- # jq -r . 00:22:02.301 [2024-07-22 15:59:04.811951] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:02.301 [2024-07-22 15:59:04.812045] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56994 ] 00:22:02.301 [2024-07-22 15:59:04.949155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.301 [2024-07-22 15:59:05.009358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.685 15:59:06 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:22:03.685 00:22:03.685 SPDK Configuration: 00:22:03.685 Core mask: 0x1 00:22:03.685 00:22:03.685 Accel Perf Configuration: 00:22:03.685 Workload Type: decompress 00:22:03.685 Transfer size: 4096 bytes 00:22:03.685 Vector count 1 00:22:03.685 Module: software 00:22:03.685 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:22:03.685 Queue depth: 32 00:22:03.685 Allocate depth: 32 00:22:03.685 # threads/core: 2 00:22:03.685 Run time: 1 seconds 00:22:03.685 Verify: Yes 00:22:03.685 00:22:03.685 Running for 1 seconds... 00:22:03.685 00:22:03.685 Core,Thread Transfers Bandwidth Failed Miscompares 00:22:03.685 ------------------------------------------------------------------------------------ 00:22:03.685 0,1 31072/s 57 MiB/s 0 0 00:22:03.685 0,0 31008/s 57 MiB/s 0 0 00:22:03.685 ==================================================================================== 00:22:03.685 Total 62080/s 242 MiB/s 0 0' 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # IFS=: 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # read -r var val 00:22:03.685 15:59:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:22:03.685 15:59:06 -- accel/accel.sh@12 -- # build_accel_config 00:22:03.685 15:59:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:22:03.685 15:59:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:22:03.685 15:59:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:03.685 15:59:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:03.685 15:59:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:22:03.685 15:59:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:22:03.685 15:59:06 -- accel/accel.sh@41 -- # local IFS=, 00:22:03.685 15:59:06 -- accel/accel.sh@42 -- # jq -r . 00:22:03.685 [2024-07-22 15:59:06.210234] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
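With -T 2 the harness runs two worker threads on the single enabled core (core mask 0x1), which is why the table above lists rows 0,0 and 0,1 rather than separate cores; their combined transfer rate reproduces the Total bandwidth (bash sketch, values from the table):

    echo "$(( (31072 + 31008) * 4096 / 1024 / 1024 )) MiB/s"   # prints: 242 MiB/s
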
00:22:03.685 [2024-07-22 15:59:06.210339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57013 ] 00:22:03.685 [2024-07-22 15:59:06.344538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.685 [2024-07-22 15:59:06.402999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.685 15:59:06 -- accel/accel.sh@21 -- # val= 00:22:03.685 15:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # IFS=: 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # read -r var val 00:22:03.685 15:59:06 -- accel/accel.sh@21 -- # val= 00:22:03.685 15:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # IFS=: 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # read -r var val 00:22:03.685 15:59:06 -- accel/accel.sh@21 -- # val= 00:22:03.685 15:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # IFS=: 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # read -r var val 00:22:03.685 15:59:06 -- accel/accel.sh@21 -- # val=0x1 00:22:03.685 15:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # IFS=: 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # read -r var val 00:22:03.685 15:59:06 -- accel/accel.sh@21 -- # val= 00:22:03.685 15:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # IFS=: 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # read -r var val 00:22:03.685 15:59:06 -- accel/accel.sh@21 -- # val= 00:22:03.685 15:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # IFS=: 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # read -r var val 00:22:03.685 15:59:06 -- accel/accel.sh@21 -- # val=decompress 00:22:03.685 15:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:22:03.685 15:59:06 -- accel/accel.sh@24 -- # accel_opc=decompress 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # IFS=: 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # read -r var val 00:22:03.685 15:59:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:22:03.685 15:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # IFS=: 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # read -r var val 00:22:03.685 15:59:06 -- accel/accel.sh@21 -- # val= 00:22:03.685 15:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # IFS=: 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # read -r var val 00:22:03.685 15:59:06 -- accel/accel.sh@21 -- # val=software 00:22:03.685 15:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:22:03.685 15:59:06 -- accel/accel.sh@23 -- # accel_module=software 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # IFS=: 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # read -r var val 00:22:03.685 15:59:06 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:22:03.685 15:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # IFS=: 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # read -r var val 00:22:03.685 15:59:06 -- accel/accel.sh@21 -- # val=32 00:22:03.685 15:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # IFS=: 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # read -r var val 00:22:03.685 15:59:06 -- 
accel/accel.sh@21 -- # val=32 00:22:03.685 15:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # IFS=: 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # read -r var val 00:22:03.685 15:59:06 -- accel/accel.sh@21 -- # val=2 00:22:03.685 15:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # IFS=: 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # read -r var val 00:22:03.685 15:59:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:22:03.685 15:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # IFS=: 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # read -r var val 00:22:03.685 15:59:06 -- accel/accel.sh@21 -- # val=Yes 00:22:03.685 15:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # IFS=: 00:22:03.685 15:59:06 -- accel/accel.sh@20 -- # read -r var val 00:22:03.686 15:59:06 -- accel/accel.sh@21 -- # val= 00:22:03.686 15:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:22:03.686 15:59:06 -- accel/accel.sh@20 -- # IFS=: 00:22:03.686 15:59:06 -- accel/accel.sh@20 -- # read -r var val 00:22:03.686 15:59:06 -- accel/accel.sh@21 -- # val= 00:22:03.686 15:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:22:03.686 15:59:06 -- accel/accel.sh@20 -- # IFS=: 00:22:03.686 15:59:06 -- accel/accel.sh@20 -- # read -r var val 00:22:05.061 15:59:07 -- accel/accel.sh@21 -- # val= 00:22:05.061 15:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:22:05.061 15:59:07 -- accel/accel.sh@20 -- # IFS=: 00:22:05.061 15:59:07 -- accel/accel.sh@20 -- # read -r var val 00:22:05.061 15:59:07 -- accel/accel.sh@21 -- # val= 00:22:05.061 15:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:22:05.061 15:59:07 -- accel/accel.sh@20 -- # IFS=: 00:22:05.061 15:59:07 -- accel/accel.sh@20 -- # read -r var val 00:22:05.061 15:59:07 -- accel/accel.sh@21 -- # val= 00:22:05.061 15:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:22:05.061 15:59:07 -- accel/accel.sh@20 -- # IFS=: 00:22:05.061 15:59:07 -- accel/accel.sh@20 -- # read -r var val 00:22:05.061 15:59:07 -- accel/accel.sh@21 -- # val= 00:22:05.061 15:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:22:05.061 15:59:07 -- accel/accel.sh@20 -- # IFS=: 00:22:05.061 15:59:07 -- accel/accel.sh@20 -- # read -r var val 00:22:05.061 15:59:07 -- accel/accel.sh@21 -- # val= 00:22:05.061 15:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:22:05.061 15:59:07 -- accel/accel.sh@20 -- # IFS=: 00:22:05.061 15:59:07 -- accel/accel.sh@20 -- # read -r var val 00:22:05.061 15:59:07 -- accel/accel.sh@21 -- # val= 00:22:05.061 15:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:22:05.061 15:59:07 -- accel/accel.sh@20 -- # IFS=: 00:22:05.061 15:59:07 -- accel/accel.sh@20 -- # read -r var val 00:22:05.061 15:59:07 -- accel/accel.sh@21 -- # val= 00:22:05.061 15:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:22:05.061 15:59:07 -- accel/accel.sh@20 -- # IFS=: 00:22:05.061 15:59:07 -- accel/accel.sh@20 -- # read -r var val 00:22:05.061 15:59:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:22:05.061 15:59:07 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:22:05.061 15:59:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:05.061 00:22:05.061 real 0m2.795s 00:22:05.061 user 0m2.446s 00:22:05.061 sys 0m0.143s 00:22:05.061 15:59:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:05.061 15:59:07 -- common/autotest_common.sh@10 -- # set +x 00:22:05.061 ************************************ 00:22:05.061 END 
TEST accel_decomp_mthread 00:22:05.061 ************************************ 00:22:05.061 15:59:07 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:22:05.061 15:59:07 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:22:05.061 15:59:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:05.061 15:59:07 -- common/autotest_common.sh@10 -- # set +x 00:22:05.061 ************************************ 00:22:05.061 START TEST accel_deomp_full_mthread 00:22:05.061 ************************************ 00:22:05.061 15:59:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:22:05.061 15:59:07 -- accel/accel.sh@16 -- # local accel_opc 00:22:05.061 15:59:07 -- accel/accel.sh@17 -- # local accel_module 00:22:05.061 15:59:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:22:05.061 15:59:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:22:05.061 15:59:07 -- accel/accel.sh@12 -- # build_accel_config 00:22:05.061 15:59:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:22:05.061 15:59:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:05.061 15:59:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:05.061 15:59:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:22:05.061 15:59:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:22:05.061 15:59:07 -- accel/accel.sh@41 -- # local IFS=, 00:22:05.061 15:59:07 -- accel/accel.sh@42 -- # jq -r . 00:22:05.061 [2024-07-22 15:59:07.657844] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:05.061 [2024-07-22 15:59:07.657932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57048 ] 00:22:05.061 [2024-07-22 15:59:07.794029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.061 [2024-07-22 15:59:07.871529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.438 15:59:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:22:06.438 00:22:06.438 SPDK Configuration: 00:22:06.438 Core mask: 0x1 00:22:06.438 00:22:06.438 Accel Perf Configuration: 00:22:06.438 Workload Type: decompress 00:22:06.438 Transfer size: 111250 bytes 00:22:06.438 Vector count 1 00:22:06.438 Module: software 00:22:06.439 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:22:06.439 Queue depth: 32 00:22:06.439 Allocate depth: 32 00:22:06.439 # threads/core: 2 00:22:06.439 Run time: 1 seconds 00:22:06.439 Verify: Yes 00:22:06.439 00:22:06.439 Running for 1 seconds... 
00:22:06.439 00:22:06.439 Core,Thread Transfers Bandwidth Failed Miscompares 00:22:06.439 ------------------------------------------------------------------------------------ 00:22:06.439 0,1 2112/s 87 MiB/s 0 0 00:22:06.439 0,0 2080/s 85 MiB/s 0 0 00:22:06.439 ==================================================================================== 00:22:06.439 Total 4192/s 444 MiB/s 0 0' 00:22:06.439 15:59:09 -- accel/accel.sh@20 -- # IFS=: 00:22:06.439 15:59:09 -- accel/accel.sh@20 -- # read -r var val 00:22:06.439 15:59:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:22:06.439 15:59:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:22:06.439 15:59:09 -- accel/accel.sh@12 -- # build_accel_config 00:22:06.439 15:59:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:22:06.439 15:59:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:06.439 15:59:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:06.439 15:59:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:22:06.439 15:59:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:22:06.439 15:59:09 -- accel/accel.sh@41 -- # local IFS=, 00:22:06.439 15:59:09 -- accel/accel.sh@42 -- # jq -r . 00:22:06.439 [2024-07-22 15:59:09.098060] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:06.439 [2024-07-22 15:59:09.098167] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57062 ] 00:22:06.439 [2024-07-22 15:59:09.238143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.698 [2024-07-22 15:59:09.306513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.698 15:59:09 -- accel/accel.sh@21 -- # val= 00:22:06.698 15:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # IFS=: 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # read -r var val 00:22:06.698 15:59:09 -- accel/accel.sh@21 -- # val= 00:22:06.698 15:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # IFS=: 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # read -r var val 00:22:06.698 15:59:09 -- accel/accel.sh@21 -- # val= 00:22:06.698 15:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # IFS=: 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # read -r var val 00:22:06.698 15:59:09 -- accel/accel.sh@21 -- # val=0x1 00:22:06.698 15:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # IFS=: 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # read -r var val 00:22:06.698 15:59:09 -- accel/accel.sh@21 -- # val= 00:22:06.698 15:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # IFS=: 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # read -r var val 00:22:06.698 15:59:09 -- accel/accel.sh@21 -- # val= 00:22:06.698 15:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # IFS=: 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # read -r var val 00:22:06.698 15:59:09 -- accel/accel.sh@21 -- # val=decompress 00:22:06.698 15:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:22:06.698 15:59:09 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # IFS=: 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # read -r var val 00:22:06.698 15:59:09 -- accel/accel.sh@21 -- # val='111250 bytes' 00:22:06.698 15:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # IFS=: 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # read -r var val 00:22:06.698 15:59:09 -- accel/accel.sh@21 -- # val= 00:22:06.698 15:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # IFS=: 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # read -r var val 00:22:06.698 15:59:09 -- accel/accel.sh@21 -- # val=software 00:22:06.698 15:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:22:06.698 15:59:09 -- accel/accel.sh@23 -- # accel_module=software 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # IFS=: 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # read -r var val 00:22:06.698 15:59:09 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:22:06.698 15:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # IFS=: 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # read -r var val 00:22:06.698 15:59:09 -- accel/accel.sh@21 -- # val=32 00:22:06.698 15:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # IFS=: 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # read -r var val 00:22:06.698 15:59:09 -- accel/accel.sh@21 -- # val=32 00:22:06.698 15:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # IFS=: 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # read -r var val 00:22:06.698 15:59:09 -- accel/accel.sh@21 -- # val=2 00:22:06.698 15:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # IFS=: 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # read -r var val 00:22:06.698 15:59:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:22:06.698 15:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # IFS=: 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # read -r var val 00:22:06.698 15:59:09 -- accel/accel.sh@21 -- # val=Yes 00:22:06.698 15:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # IFS=: 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # read -r var val 00:22:06.698 15:59:09 -- accel/accel.sh@21 -- # val= 00:22:06.698 15:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # IFS=: 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # read -r var val 00:22:06.698 15:59:09 -- accel/accel.sh@21 -- # val= 00:22:06.698 15:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # IFS=: 00:22:06.698 15:59:09 -- accel/accel.sh@20 -- # read -r var val 00:22:08.073 15:59:10 -- accel/accel.sh@21 -- # val= 00:22:08.073 15:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:22:08.073 15:59:10 -- accel/accel.sh@20 -- # IFS=: 00:22:08.073 15:59:10 -- accel/accel.sh@20 -- # read -r var val 00:22:08.073 15:59:10 -- accel/accel.sh@21 -- # val= 00:22:08.073 15:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:22:08.073 15:59:10 -- accel/accel.sh@20 -- # IFS=: 00:22:08.073 15:59:10 -- accel/accel.sh@20 -- # read -r var val 00:22:08.073 15:59:10 -- accel/accel.sh@21 -- # val= 00:22:08.073 15:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:22:08.073 15:59:10 -- accel/accel.sh@20 -- # IFS=: 00:22:08.073 15:59:10 -- accel/accel.sh@20 -- # 
read -r var val 00:22:08.073 15:59:10 -- accel/accel.sh@21 -- # val= 00:22:08.073 15:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:22:08.073 15:59:10 -- accel/accel.sh@20 -- # IFS=: 00:22:08.073 15:59:10 -- accel/accel.sh@20 -- # read -r var val 00:22:08.073 15:59:10 -- accel/accel.sh@21 -- # val= 00:22:08.073 15:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:22:08.073 15:59:10 -- accel/accel.sh@20 -- # IFS=: 00:22:08.073 15:59:10 -- accel/accel.sh@20 -- # read -r var val 00:22:08.073 15:59:10 -- accel/accel.sh@21 -- # val= 00:22:08.073 15:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:22:08.073 15:59:10 -- accel/accel.sh@20 -- # IFS=: 00:22:08.073 15:59:10 -- accel/accel.sh@20 -- # read -r var val 00:22:08.073 15:59:10 -- accel/accel.sh@21 -- # val= 00:22:08.073 15:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:22:08.073 15:59:10 -- accel/accel.sh@20 -- # IFS=: 00:22:08.073 15:59:10 -- accel/accel.sh@20 -- # read -r var val 00:22:08.073 15:59:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:22:08.073 15:59:10 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:22:08.073 15:59:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:08.073 00:22:08.073 real 0m2.890s 00:22:08.073 user 0m2.529s 00:22:08.073 sys 0m0.154s 00:22:08.073 15:59:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.073 15:59:10 -- common/autotest_common.sh@10 -- # set +x 00:22:08.073 ************************************ 00:22:08.073 END TEST accel_deomp_full_mthread 00:22:08.073 ************************************ 00:22:08.073 15:59:10 -- accel/accel.sh@116 -- # [[ n == y ]] 00:22:08.073 15:59:10 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:22:08.073 15:59:10 -- accel/accel.sh@129 -- # build_accel_config 00:22:08.073 15:59:10 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:22:08.073 15:59:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:08.073 15:59:10 -- common/autotest_common.sh@10 -- # set +x 00:22:08.073 15:59:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:22:08.073 15:59:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:22:08.073 15:59:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:22:08.073 15:59:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:22:08.073 15:59:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:22:08.073 15:59:10 -- accel/accel.sh@41 -- # local IFS=, 00:22:08.073 15:59:10 -- accel/accel.sh@42 -- # jq -r . 00:22:08.073 ************************************ 00:22:08.073 START TEST accel_dif_functional_tests 00:22:08.073 ************************************ 00:22:08.073 15:59:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:22:08.073 [2024-07-22 15:59:10.641203] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:08.073 [2024-07-22 15:59:10.641330] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57103 ] 00:22:08.073 [2024-07-22 15:59:10.784092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:08.073 [2024-07-22 15:59:10.845544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.073 [2024-07-22 15:59:10.845661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.073 [2024-07-22 15:59:10.845666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.073 00:22:08.073 00:22:08.073 CUnit - A unit testing framework for C - Version 2.1-3 00:22:08.073 http://cunit.sourceforge.net/ 00:22:08.073 00:22:08.073 00:22:08.073 Suite: accel_dif 00:22:08.073 Test: verify: DIF generated, GUARD check ...passed 00:22:08.073 Test: verify: DIF generated, APPTAG check ...passed 00:22:08.073 Test: verify: DIF generated, REFTAG check ...passed 00:22:08.073 Test: verify: DIF not generated, GUARD check ...passed 00:22:08.073 Test: verify: DIF not generated, APPTAG check ...[2024-07-22 15:59:10.899677] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:22:08.073 [2024-07-22 15:59:10.899813] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:22:08.073 [2024-07-22 15:59:10.899855] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:22:08.073 passed 00:22:08.073 Test: verify: DIF not generated, REFTAG check ...passed 00:22:08.073 Test: verify: APPTAG correct, APPTAG check ...passed 00:22:08.073 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:22:08.073 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-07-22 15:59:10.899889] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:22:08.073 [2024-07-22 15:59:10.899916] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:22:08.073 [2024-07-22 15:59:10.899989] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:22:08.073 [2024-07-22 15:59:10.900052] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:22:08.073 passed 00:22:08.073 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:22:08.073 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:22:08.073 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-22 15:59:10.900436] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:22:08.073 passed 00:22:08.073 Test: generate copy: DIF generated, GUARD check ...passed 00:22:08.073 Test: generate copy: DIF generated, APTTAG check ...passed 00:22:08.073 Test: generate copy: DIF generated, REFTAG check ...passed 00:22:08.073 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:22:08.073 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:22:08.073 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:22:08.073 Test: generate copy: iovecs-len validate ...[2024-07-22 15:59:10.901001] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:22:08.073 passed 00:22:08.073 Test: generate copy: buffer alignment validate ...passed 00:22:08.073 00:22:08.073 Run Summary: Type Total Ran Passed Failed Inactive 00:22:08.073 suites 1 1 n/a 0 0 00:22:08.073 tests 20 20 20 0 0 00:22:08.073 asserts 204 204 204 0 n/a 00:22:08.073 00:22:08.073 Elapsed time = 0.003 seconds 00:22:08.332 00:22:08.332 real 0m0.502s 00:22:08.332 user 0m0.568s 00:22:08.332 sys 0m0.116s 00:22:08.332 ************************************ 00:22:08.332 END TEST accel_dif_functional_tests 00:22:08.332 ************************************ 00:22:08.332 15:59:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.332 15:59:11 -- common/autotest_common.sh@10 -- # set +x 00:22:08.332 ************************************ 00:22:08.332 END TEST accel 00:22:08.332 ************************************ 00:22:08.332 00:22:08.332 real 1m0.294s 00:22:08.332 user 1m5.505s 00:22:08.332 sys 0m4.292s 00:22:08.332 15:59:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.332 15:59:11 -- common/autotest_common.sh@10 -- # set +x 00:22:08.332 15:59:11 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:22:08.332 15:59:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:08.332 15:59:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:08.332 15:59:11 -- common/autotest_common.sh@10 -- # set +x 00:22:08.332 ************************************ 00:22:08.332 START TEST accel_rpc 00:22:08.332 ************************************ 00:22:08.332 15:59:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:22:08.591 * Looking for test storage... 00:22:08.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:22:08.591 15:59:11 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:22:08.591 15:59:11 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=57165 00:22:08.591 15:59:11 -- accel/accel_rpc.sh@15 -- # waitforlisten 57165 00:22:08.591 15:59:11 -- common/autotest_common.sh@819 -- # '[' -z 57165 ']' 00:22:08.591 15:59:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.591 15:59:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:08.591 15:59:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.591 15:59:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:08.591 15:59:11 -- common/autotest_common.sh@10 -- # set +x 00:22:08.591 15:59:11 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:22:08.591 [2024-07-22 15:59:11.304279] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:08.591 [2024-07-22 15:59:11.304379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57165 ] 00:22:08.591 [2024-07-22 15:59:11.445025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.864 [2024-07-22 15:59:11.512126] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:08.864 [2024-07-22 15:59:11.512311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.443 15:59:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:09.443 15:59:12 -- common/autotest_common.sh@852 -- # return 0 00:22:09.443 15:59:12 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:22:09.443 15:59:12 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:22:09.443 15:59:12 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:22:09.443 15:59:12 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:22:09.443 15:59:12 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:22:09.443 15:59:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:09.443 15:59:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:09.443 15:59:12 -- common/autotest_common.sh@10 -- # set +x 00:22:09.443 ************************************ 00:22:09.443 START TEST accel_assign_opcode 00:22:09.443 ************************************ 00:22:09.702 15:59:12 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:22:09.702 15:59:12 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:22:09.702 15:59:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.702 15:59:12 -- common/autotest_common.sh@10 -- # set +x 00:22:09.702 [2024-07-22 15:59:12.312816] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:22:09.702 15:59:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.702 15:59:12 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:22:09.702 15:59:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.702 15:59:12 -- common/autotest_common.sh@10 -- # set +x 00:22:09.702 [2024-07-22 15:59:12.320804] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:22:09.702 15:59:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.702 15:59:12 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:22:09.702 15:59:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.702 15:59:12 -- common/autotest_common.sh@10 -- # set +x 00:22:09.702 15:59:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.702 15:59:12 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:22:09.702 15:59:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.702 15:59:12 -- common/autotest_common.sh@10 -- # set +x 00:22:09.702 15:59:12 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:22:09.702 15:59:12 -- accel/accel_rpc.sh@42 -- # grep software 00:22:09.702 15:59:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.702 software 00:22:09.702 00:22:09.702 real 0m0.195s 00:22:09.702 user 0m0.048s 00:22:09.702 sys 0m0.014s 00:22:09.702 15:59:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:09.702 15:59:12 -- common/autotest_common.sh@10 -- # set +x 00:22:09.702 ************************************ 
00:22:09.702 END TEST accel_assign_opcode 00:22:09.702 ************************************ 00:22:09.702 15:59:12 -- accel/accel_rpc.sh@55 -- # killprocess 57165 00:22:09.702 15:59:12 -- common/autotest_common.sh@926 -- # '[' -z 57165 ']' 00:22:09.702 15:59:12 -- common/autotest_common.sh@930 -- # kill -0 57165 00:22:09.702 15:59:12 -- common/autotest_common.sh@931 -- # uname 00:22:09.702 15:59:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:09.702 15:59:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57165 00:22:09.702 15:59:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:09.702 killing process with pid 57165 00:22:09.702 15:59:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:09.702 15:59:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57165' 00:22:09.702 15:59:12 -- common/autotest_common.sh@945 -- # kill 57165 00:22:09.702 15:59:12 -- common/autotest_common.sh@950 -- # wait 57165 00:22:10.275 00:22:10.275 real 0m1.677s 00:22:10.275 user 0m1.904s 00:22:10.275 sys 0m0.324s 00:22:10.275 15:59:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:10.275 ************************************ 00:22:10.275 END TEST accel_rpc 00:22:10.275 ************************************ 00:22:10.275 15:59:12 -- common/autotest_common.sh@10 -- # set +x 00:22:10.275 15:59:12 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:22:10.275 15:59:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:10.275 15:59:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:10.275 15:59:12 -- common/autotest_common.sh@10 -- # set +x 00:22:10.275 ************************************ 00:22:10.275 START TEST app_cmdline 00:22:10.275 ************************************ 00:22:10.275 15:59:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:22:10.275 * Looking for test storage... 00:22:10.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:22:10.275 15:59:12 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:22:10.275 15:59:12 -- app/cmdline.sh@17 -- # spdk_tgt_pid=57253 00:22:10.275 15:59:12 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:22:10.275 15:59:12 -- app/cmdline.sh@18 -- # waitforlisten 57253 00:22:10.276 15:59:12 -- common/autotest_common.sh@819 -- # '[' -z 57253 ']' 00:22:10.276 15:59:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.276 15:59:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:10.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.276 15:59:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.276 15:59:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:10.276 15:59:12 -- common/autotest_common.sh@10 -- # set +x 00:22:10.276 [2024-07-22 15:59:13.029673] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:10.276 [2024-07-22 15:59:13.029781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57253 ] 00:22:10.539 [2024-07-22 15:59:13.166969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.539 [2024-07-22 15:59:13.247055] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:10.539 [2024-07-22 15:59:13.247251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.475 15:59:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:11.475 15:59:13 -- common/autotest_common.sh@852 -- # return 0 00:22:11.475 15:59:13 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:22:11.475 { 00:22:11.475 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:22:11.475 "fields": { 00:22:11.475 "major": 24, 00:22:11.475 "minor": 1, 00:22:11.475 "patch": 1, 00:22:11.475 "suffix": "-pre", 00:22:11.475 "commit": "dbef7efac" 00:22:11.475 } 00:22:11.475 } 00:22:11.475 15:59:14 -- app/cmdline.sh@22 -- # expected_methods=() 00:22:11.475 15:59:14 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:22:11.475 15:59:14 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:22:11.475 15:59:14 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:22:11.475 15:59:14 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:22:11.475 15:59:14 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:22:11.475 15:59:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:11.475 15:59:14 -- common/autotest_common.sh@10 -- # set +x 00:22:11.475 15:59:14 -- app/cmdline.sh@26 -- # sort 00:22:11.475 15:59:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:11.475 15:59:14 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:22:11.475 15:59:14 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:22:11.475 15:59:14 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:22:11.475 15:59:14 -- common/autotest_common.sh@640 -- # local es=0 00:22:11.475 15:59:14 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:22:11.475 15:59:14 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:11.475 15:59:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:11.475 15:59:14 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:11.475 15:59:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:11.475 15:59:14 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:11.475 15:59:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:11.475 15:59:14 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:11.475 15:59:14 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:11.475 15:59:14 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:22:11.734 request: 00:22:11.734 { 00:22:11.734 "method": "env_dpdk_get_mem_stats", 00:22:11.734 "req_id": 1 00:22:11.734 } 00:22:11.734 Got 
JSON-RPC error response 00:22:11.734 response: 00:22:11.734 { 00:22:11.734 "code": -32601, 00:22:11.734 "message": "Method not found" 00:22:11.734 } 00:22:11.734 15:59:14 -- common/autotest_common.sh@643 -- # es=1 00:22:11.734 15:59:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:11.734 15:59:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:11.734 15:59:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:11.734 15:59:14 -- app/cmdline.sh@1 -- # killprocess 57253 00:22:11.734 15:59:14 -- common/autotest_common.sh@926 -- # '[' -z 57253 ']' 00:22:11.734 15:59:14 -- common/autotest_common.sh@930 -- # kill -0 57253 00:22:11.993 15:59:14 -- common/autotest_common.sh@931 -- # uname 00:22:11.993 15:59:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:11.993 15:59:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57253 00:22:11.993 15:59:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:11.993 killing process with pid 57253 00:22:11.993 15:59:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:11.993 15:59:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57253' 00:22:11.993 15:59:14 -- common/autotest_common.sh@945 -- # kill 57253 00:22:11.993 15:59:14 -- common/autotest_common.sh@950 -- # wait 57253 00:22:12.252 00:22:12.252 real 0m2.006s 00:22:12.252 user 0m2.650s 00:22:12.252 sys 0m0.374s 00:22:12.252 15:59:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:12.252 ************************************ 00:22:12.252 END TEST app_cmdline 00:22:12.252 ************************************ 00:22:12.252 15:59:14 -- common/autotest_common.sh@10 -- # set +x 00:22:12.252 15:59:14 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:22:12.252 15:59:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:12.252 15:59:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:12.252 15:59:14 -- common/autotest_common.sh@10 -- # set +x 00:22:12.252 ************************************ 00:22:12.252 START TEST version 00:22:12.252 ************************************ 00:22:12.252 15:59:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:22:12.252 * Looking for test storage... 
00:22:12.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:22:12.252 15:59:15 -- app/version.sh@17 -- # get_header_version major 00:22:12.252 15:59:15 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:22:12.252 15:59:15 -- app/version.sh@14 -- # cut -f2 00:22:12.252 15:59:15 -- app/version.sh@14 -- # tr -d '"' 00:22:12.252 15:59:15 -- app/version.sh@17 -- # major=24 00:22:12.252 15:59:15 -- app/version.sh@18 -- # get_header_version minor 00:22:12.252 15:59:15 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:22:12.252 15:59:15 -- app/version.sh@14 -- # cut -f2 00:22:12.252 15:59:15 -- app/version.sh@14 -- # tr -d '"' 00:22:12.252 15:59:15 -- app/version.sh@18 -- # minor=1 00:22:12.252 15:59:15 -- app/version.sh@19 -- # get_header_version patch 00:22:12.252 15:59:15 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:22:12.252 15:59:15 -- app/version.sh@14 -- # cut -f2 00:22:12.252 15:59:15 -- app/version.sh@14 -- # tr -d '"' 00:22:12.252 15:59:15 -- app/version.sh@19 -- # patch=1 00:22:12.252 15:59:15 -- app/version.sh@20 -- # get_header_version suffix 00:22:12.252 15:59:15 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:22:12.252 15:59:15 -- app/version.sh@14 -- # cut -f2 00:22:12.252 15:59:15 -- app/version.sh@14 -- # tr -d '"' 00:22:12.252 15:59:15 -- app/version.sh@20 -- # suffix=-pre 00:22:12.252 15:59:15 -- app/version.sh@22 -- # version=24.1 00:22:12.252 15:59:15 -- app/version.sh@25 -- # (( patch != 0 )) 00:22:12.252 15:59:15 -- app/version.sh@25 -- # version=24.1.1 00:22:12.252 15:59:15 -- app/version.sh@28 -- # version=24.1.1rc0 00:22:12.252 15:59:15 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:22:12.252 15:59:15 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:22:12.252 15:59:15 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:22:12.252 15:59:15 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:22:12.252 00:22:12.252 real 0m0.141s 00:22:12.252 user 0m0.087s 00:22:12.252 sys 0m0.086s 00:22:12.252 15:59:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:12.252 ************************************ 00:22:12.252 END TEST version 00:22:12.252 ************************************ 00:22:12.252 15:59:15 -- common/autotest_common.sh@10 -- # set +x 00:22:12.513 15:59:15 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:22:12.513 15:59:15 -- spdk/autotest.sh@204 -- # uname -s 00:22:12.513 15:59:15 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:22:12.513 15:59:15 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:22:12.513 15:59:15 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:22:12.513 15:59:15 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:22:12.513 15:59:15 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:22:12.513 15:59:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:12.513 15:59:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:12.513 15:59:15 -- common/autotest_common.sh@10 -- # set +x 00:22:12.513 
************************************ 00:22:12.513 START TEST spdk_dd 00:22:12.513 ************************************ 00:22:12.513 15:59:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:22:12.513 * Looking for test storage... 00:22:12.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:22:12.513 15:59:15 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:12.513 15:59:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.513 15:59:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.513 15:59:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.513 15:59:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.513 15:59:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.513 15:59:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.513 15:59:15 -- paths/export.sh@5 -- # export PATH 00:22:12.513 15:59:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.513 15:59:15 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:12.772 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:12.772 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:12.772 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:12.772 15:59:15 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:22:12.772 15:59:15 -- dd/dd.sh@11 -- # nvme_in_userspace 00:22:12.772 15:59:15 -- scripts/common.sh@311 -- # local bdf bdfs 00:22:12.772 15:59:15 -- scripts/common.sh@312 -- # local nvmes 00:22:12.772 15:59:15 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:22:12.772 15:59:15 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:22:12.772 15:59:15 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:22:12.772 15:59:15 -- scripts/common.sh@297 -- # local bdf= 00:22:12.772 15:59:15 -- scripts/common.sh@299 -- # 
iter_all_pci_class_code 01 08 02 00:22:12.772 15:59:15 -- scripts/common.sh@232 -- # local class 00:22:12.772 15:59:15 -- scripts/common.sh@233 -- # local subclass 00:22:12.772 15:59:15 -- scripts/common.sh@234 -- # local progif 00:22:12.772 15:59:15 -- scripts/common.sh@235 -- # printf %02x 1 00:22:12.772 15:59:15 -- scripts/common.sh@235 -- # class=01 00:22:12.772 15:59:15 -- scripts/common.sh@236 -- # printf %02x 8 00:22:12.772 15:59:15 -- scripts/common.sh@236 -- # subclass=08 00:22:12.772 15:59:15 -- scripts/common.sh@237 -- # printf %02x 2 00:22:12.772 15:59:15 -- scripts/common.sh@237 -- # progif=02 00:22:12.772 15:59:15 -- scripts/common.sh@239 -- # hash lspci 00:22:12.772 15:59:15 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:22:12.772 15:59:15 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:22:12.772 15:59:15 -- scripts/common.sh@242 -- # grep -i -- -p02 00:22:12.772 15:59:15 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:22:12.772 15:59:15 -- scripts/common.sh@244 -- # tr -d '"' 00:22:12.772 15:59:15 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:12.772 15:59:15 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:22:12.772 15:59:15 -- scripts/common.sh@15 -- # local i 00:22:12.772 15:59:15 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:22:12.772 15:59:15 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:22:12.772 15:59:15 -- scripts/common.sh@24 -- # return 0 00:22:12.772 15:59:15 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:22:12.772 15:59:15 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:12.772 15:59:15 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:22:12.772 15:59:15 -- scripts/common.sh@15 -- # local i 00:22:12.772 15:59:15 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:22:12.772 15:59:15 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:22:12.772 15:59:15 -- scripts/common.sh@24 -- # return 0 00:22:12.772 15:59:15 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:22:12.772 15:59:15 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:22:12.772 15:59:15 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:22:12.772 15:59:15 -- scripts/common.sh@322 -- # uname -s 00:22:12.772 15:59:15 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:22:12.772 15:59:15 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:22:12.772 15:59:15 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:22:12.772 15:59:15 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:22:12.772 15:59:15 -- scripts/common.sh@322 -- # uname -s 00:22:12.772 15:59:15 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:22:12.772 15:59:15 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:22:12.772 15:59:15 -- scripts/common.sh@327 -- # (( 2 )) 00:22:12.772 15:59:15 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:22:12.772 15:59:15 -- dd/dd.sh@13 -- # check_liburing 00:22:12.772 15:59:15 -- dd/common.sh@139 -- # local lib so 00:22:12.773 15:59:15 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:22:12.773 15:59:15 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 
-- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- 
# [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:12.773 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:22:12.773 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.2.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_scsi.so.8.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.2.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 
-- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- 
# read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:13.047 15:59:15 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:22:13.047 15:59:15 -- dd/common.sh@144 -- # printf '* 
spdk_dd linked to liburing\n' 00:22:13.047 * spdk_dd linked to liburing 00:22:13.047 15:59:15 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:22:13.047 15:59:15 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:22:13.047 15:59:15 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:22:13.047 15:59:15 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:22:13.047 15:59:15 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:22:13.047 15:59:15 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:22:13.047 15:59:15 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:22:13.047 15:59:15 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:22:13.047 15:59:15 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:22:13.047 15:59:15 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:22:13.047 15:59:15 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:22:13.047 15:59:15 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:22:13.047 15:59:15 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:22:13.047 15:59:15 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:22:13.047 15:59:15 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:22:13.047 15:59:15 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:22:13.047 15:59:15 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:22:13.047 15:59:15 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:22:13.047 15:59:15 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:22:13.047 15:59:15 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:22:13.047 15:59:15 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:22:13.047 15:59:15 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:22:13.047 15:59:15 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:22:13.047 15:59:15 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:22:13.047 15:59:15 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:22:13.047 15:59:15 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:22:13.047 15:59:15 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:22:13.047 15:59:15 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:22:13.047 15:59:15 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:22:13.047 15:59:15 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:22:13.047 15:59:15 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:22:13.047 15:59:15 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:22:13.047 15:59:15 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:22:13.047 15:59:15 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:22:13.047 15:59:15 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:22:13.047 15:59:15 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:22:13.047 15:59:15 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:22:13.047 15:59:15 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:22:13.047 15:59:15 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:22:13.047 15:59:15 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:22:13.047 15:59:15 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:22:13.047 15:59:15 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:22:13.047 15:59:15 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:22:13.047 15:59:15 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:22:13.047 15:59:15 -- common/build_config.sh@43 -- 
# CONFIG_UNIT_TESTS=n 00:22:13.047 15:59:15 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:22:13.047 15:59:15 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:22:13.047 15:59:15 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:22:13.047 15:59:15 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:22:13.047 15:59:15 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:22:13.047 15:59:15 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:22:13.047 15:59:15 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:22:13.047 15:59:15 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:22:13.047 15:59:15 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:22:13.047 15:59:15 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:22:13.047 15:59:15 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:22:13.047 15:59:15 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:22:13.047 15:59:15 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:22:13.047 15:59:15 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:22:13.047 15:59:15 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:22:13.047 15:59:15 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:22:13.047 15:59:15 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:22:13.047 15:59:15 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:22:13.047 15:59:15 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:22:13.047 15:59:15 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:22:13.047 15:59:15 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:22:13.047 15:59:15 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:22:13.047 15:59:15 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:22:13.047 15:59:15 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:22:13.047 15:59:15 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:22:13.047 15:59:15 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:22:13.047 15:59:15 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:22:13.047 15:59:15 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:22:13.047 15:59:15 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:22:13.047 15:59:15 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:22:13.047 15:59:15 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:22:13.047 15:59:15 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:22:13.047 15:59:15 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:22:13.047 15:59:15 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:22:13.047 15:59:15 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:22:13.047 15:59:15 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:22:13.047 15:59:15 -- dd/common.sh@149 -- # [[ y != y ]] 00:22:13.047 15:59:15 -- dd/common.sh@152 -- # [[ ! 
-e /usr/lib64/liburing.so.2 ]] 00:22:13.047 15:59:15 -- dd/common.sh@156 -- # export liburing_in_use=1 00:22:13.047 15:59:15 -- dd/common.sh@156 -- # liburing_in_use=1 00:22:13.047 15:59:15 -- dd/common.sh@157 -- # return 0 00:22:13.047 15:59:15 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:22:13.047 15:59:15 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:22:13.047 15:59:15 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:22:13.047 15:59:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:13.047 15:59:15 -- common/autotest_common.sh@10 -- # set +x 00:22:13.047 ************************************ 00:22:13.047 START TEST spdk_dd_basic_rw 00:22:13.047 ************************************ 00:22:13.047 15:59:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:22:13.047 * Looking for test storage... 00:22:13.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:22:13.047 15:59:15 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:13.047 15:59:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.047 15:59:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.047 15:59:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.047 15:59:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.048 15:59:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.048 15:59:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.048 15:59:15 -- paths/export.sh@5 -- # export PATH 00:22:13.048 15:59:15 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.048 15:59:15 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:22:13.048 15:59:15 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:22:13.048 15:59:15 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:22:13.048 15:59:15 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:22:13.048 15:59:15 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:22:13.048 15:59:15 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:22:13.048 15:59:15 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:22:13.048 15:59:15 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:13.048 15:59:15 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:13.048 15:59:15 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:22:13.048 15:59:15 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:22:13.048 15:59:15 -- dd/common.sh@126 -- # mapfile -t id 00:22:13.048 15:59:15 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:22:13.308 15:59:15 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported 
Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): 
Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 100 Data Units Written: 7 Host Read Commands: 2132 Host Write Commands: 92 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:22:13.308 15:59:15 -- dd/common.sh@130 -- # lbaf=04 00:22:13.309 15:59:15 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe 
Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported 
Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 100 Data Units Written: 7 Host Read Commands: 2132 Host Write Commands: 92 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA 
Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:22:13.309 15:59:15 -- dd/common.sh@132 -- # lbaf=4096 00:22:13.309 15:59:15 -- dd/common.sh@134 -- # echo 4096 00:22:13.309 15:59:15 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:22:13.309 15:59:15 -- dd/basic_rw.sh@96 -- # : 00:22:13.309 15:59:15 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:22:13.309 15:59:15 -- dd/basic_rw.sh@96 -- # gen_conf 00:22:13.309 15:59:15 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:22:13.309 15:59:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:13.309 15:59:15 -- dd/common.sh@31 -- # xtrace_disable 00:22:13.309 15:59:15 -- common/autotest_common.sh@10 -- # set +x 00:22:13.309 15:59:15 -- common/autotest_common.sh@10 -- # set +x 00:22:13.309 ************************************ 00:22:13.309 START TEST dd_bs_lt_native_bs 00:22:13.309 ************************************ 00:22:13.309 15:59:15 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:22:13.309 15:59:15 -- common/autotest_common.sh@640 -- # local es=0 00:22:13.309 15:59:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:22:13.309 15:59:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:13.309 15:59:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:13.309 15:59:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:13.309 15:59:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:13.309 15:59:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:13.309 15:59:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:13.309 15:59:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:13.309 15:59:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:13.309 15:59:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:22:13.309 { 00:22:13.309 "subsystems": [ 00:22:13.309 { 00:22:13.309 "subsystem": "bdev", 00:22:13.309 "config": [ 00:22:13.309 { 00:22:13.309 "params": { 00:22:13.309 "trtype": "pcie", 00:22:13.309 "traddr": "0000:00:06.0", 00:22:13.309 "name": "Nvme0" 00:22:13.309 }, 00:22:13.309 "method": "bdev_nvme_attach_controller" 00:22:13.309 }, 00:22:13.309 { 00:22:13.309 "method": "bdev_wait_for_examine" 00:22:13.309 } 00:22:13.309 ] 00:22:13.309 } 00:22:13.309 ] 00:22:13.309 } 00:22:13.309 [2024-07-22 15:59:15.993384] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
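The dd_bs_lt_native_bs run that starts here is a pure negative test: spdk_dd is driven with --bs=2048, smaller than the 4096-byte native block size taken from the identify output above, and the test only passes when that invocation fails with the "--bs value cannot be less than" error seen further down. A minimal sketch of the same check, assuming a file bdev.json that holds the JSON config printed above (the traced script instead builds the config with gen_conf, feeds input through /dev/fd/62, and inverts the exit status with its NOT helper):

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  native_bs=4096                            # data size of LBA Format #04 from spdk_nvme_identify
  # dd.dump0 and bdev.json are stand-ins for the traced fd-based plumbing
  if "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 \
         --bs=2048 --json bdev.json; then
      echo "FAIL: --bs=2048 accepted although native block size is $native_bs" >&2
      exit 1
  fi
  echo "PASS: spdk_dd rejected a --bs below the native block size"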
00:22:13.309 [2024-07-22 15:59:15.993476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57574 ] 00:22:13.309 [2024-07-22 15:59:16.132101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.568 [2024-07-22 15:59:16.200481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.568 [2024-07-22 15:59:16.318076] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:22:13.568 [2024-07-22 15:59:16.318148] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:13.568 [2024-07-22 15:59:16.394779] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:22:13.827 15:59:16 -- common/autotest_common.sh@643 -- # es=234 00:22:13.827 15:59:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:13.827 15:59:16 -- common/autotest_common.sh@652 -- # es=106 00:22:13.827 15:59:16 -- common/autotest_common.sh@653 -- # case "$es" in 00:22:13.827 15:59:16 -- common/autotest_common.sh@660 -- # es=1 00:22:13.827 15:59:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:13.827 00:22:13.827 real 0m0.574s 00:22:13.827 user 0m0.420s 00:22:13.827 sys 0m0.108s 00:22:13.827 15:59:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:13.827 ************************************ 00:22:13.827 END TEST dd_bs_lt_native_bs 00:22:13.827 ************************************ 00:22:13.827 15:59:16 -- common/autotest_common.sh@10 -- # set +x 00:22:13.827 15:59:16 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:22:13.827 15:59:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:13.827 15:59:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:13.827 15:59:16 -- common/autotest_common.sh@10 -- # set +x 00:22:13.827 ************************************ 00:22:13.827 START TEST dd_rw 00:22:13.827 ************************************ 00:22:13.827 15:59:16 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:22:13.827 15:59:16 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:22:13.827 15:59:16 -- dd/basic_rw.sh@12 -- # local count size 00:22:13.827 15:59:16 -- dd/basic_rw.sh@13 -- # local qds bss 00:22:13.827 15:59:16 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:22:13.827 15:59:16 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:22:13.827 15:59:16 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:22:13.827 15:59:16 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:22:13.827 15:59:16 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:22:13.827 15:59:16 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:22:13.827 15:59:16 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:22:13.827 15:59:16 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:22:13.827 15:59:16 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:22:13.827 15:59:16 -- dd/basic_rw.sh@23 -- # count=15 00:22:13.827 15:59:16 -- dd/basic_rw.sh@24 -- # count=15 00:22:13.827 15:59:16 -- dd/basic_rw.sh@25 -- # size=61440 00:22:13.827 15:59:16 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:22:13.827 15:59:16 -- dd/common.sh@98 -- # xtrace_disable 00:22:13.827 15:59:16 -- common/autotest_common.sh@10 -- # set +x 00:22:14.394 15:59:17 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
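For the dd_rw pass running here, the parameters in the trace follow one simple rule: the 4096-byte native block size is left-shifted to get the 4096/8192/16384 block sizes, each size is exercised at queue depths 1 and 64, and every byte total is count times block size (15*4096=61440, 7*8192=57344, 3*16384=49152, matching the size= values logged in this pass). A small sketch of that arithmetic, with the per-size counts copied from the trace:

  native_bs=4096
  qds=(1 64)
  bss=()
  for i in {0..2}; do
      bss+=($((native_bs << i)))            # 4096, 8192, 16384
  done
  declare -A counts=([4096]=15 [8192]=7 [16384]=3)
  for bs in "${bss[@]}"; do
      size=$((bs * counts[$bs]))            # 61440, 57344, 49152 bytes
      echo "bs=$bs qds=${qds[*]} count=${counts[$bs]} size=$size"
  done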
00:22:14.394 15:59:17 -- dd/basic_rw.sh@30 -- # gen_conf 00:22:14.394 15:59:17 -- dd/common.sh@31 -- # xtrace_disable 00:22:14.394 15:59:17 -- common/autotest_common.sh@10 -- # set +x 00:22:14.653 [2024-07-22 15:59:17.304068] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:14.653 [2024-07-22 15:59:17.304167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57605 ] 00:22:14.653 { 00:22:14.653 "subsystems": [ 00:22:14.653 { 00:22:14.653 "subsystem": "bdev", 00:22:14.653 "config": [ 00:22:14.653 { 00:22:14.653 "params": { 00:22:14.653 "trtype": "pcie", 00:22:14.653 "traddr": "0000:00:06.0", 00:22:14.653 "name": "Nvme0" 00:22:14.653 }, 00:22:14.653 "method": "bdev_nvme_attach_controller" 00:22:14.653 }, 00:22:14.653 { 00:22:14.653 "method": "bdev_wait_for_examine" 00:22:14.653 } 00:22:14.653 ] 00:22:14.653 } 00:22:14.653 ] 00:22:14.653 } 00:22:14.653 [2024-07-22 15:59:17.441651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.653 [2024-07-22 15:59:17.509375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.170  Copying: 60/60 [kB] (average 19 MBps) 00:22:15.170 00:22:15.170 15:59:17 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:22:15.170 15:59:17 -- dd/basic_rw.sh@37 -- # gen_conf 00:22:15.170 15:59:17 -- dd/common.sh@31 -- # xtrace_disable 00:22:15.170 15:59:17 -- common/autotest_common.sh@10 -- # set +x 00:22:15.170 [2024-07-22 15:59:17.891221] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:15.170 [2024-07-22 15:59:17.891319] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57623 ] 00:22:15.170 { 00:22:15.170 "subsystems": [ 00:22:15.170 { 00:22:15.170 "subsystem": "bdev", 00:22:15.170 "config": [ 00:22:15.170 { 00:22:15.170 "params": { 00:22:15.170 "trtype": "pcie", 00:22:15.170 "traddr": "0000:00:06.0", 00:22:15.170 "name": "Nvme0" 00:22:15.171 }, 00:22:15.171 "method": "bdev_nvme_attach_controller" 00:22:15.171 }, 00:22:15.171 { 00:22:15.171 "method": "bdev_wait_for_examine" 00:22:15.171 } 00:22:15.171 ] 00:22:15.171 } 00:22:15.171 ] 00:22:15.171 } 00:22:15.171 [2024-07-22 15:59:18.027717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.429 [2024-07-22 15:59:18.122856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.687  Copying: 60/60 [kB] (average 19 MBps) 00:22:15.687 00:22:15.687 15:59:18 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:15.687 15:59:18 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:22:15.687 15:59:18 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:22:15.687 15:59:18 -- dd/common.sh@11 -- # local nvme_ref= 00:22:15.687 15:59:18 -- dd/common.sh@12 -- # local size=61440 00:22:15.687 15:59:18 -- dd/common.sh@14 -- # local bs=1048576 00:22:15.687 15:59:18 -- dd/common.sh@15 -- # local count=1 00:22:15.687 15:59:18 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:22:15.687 15:59:18 -- dd/common.sh@18 -- # gen_conf 00:22:15.687 15:59:18 -- dd/common.sh@31 -- # xtrace_disable 00:22:15.687 15:59:18 -- common/autotest_common.sh@10 -- # set +x 00:22:15.687 [2024-07-22 15:59:18.505202] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:15.687 [2024-07-22 15:59:18.505301] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57631 ] 00:22:15.687 { 00:22:15.687 "subsystems": [ 00:22:15.687 { 00:22:15.687 "subsystem": "bdev", 00:22:15.687 "config": [ 00:22:15.687 { 00:22:15.687 "params": { 00:22:15.687 "trtype": "pcie", 00:22:15.687 "traddr": "0000:00:06.0", 00:22:15.687 "name": "Nvme0" 00:22:15.687 }, 00:22:15.687 "method": "bdev_nvme_attach_controller" 00:22:15.687 }, 00:22:15.687 { 00:22:15.687 "method": "bdev_wait_for_examine" 00:22:15.687 } 00:22:15.687 ] 00:22:15.687 } 00:22:15.687 ] 00:22:15.687 } 00:22:15.946 [2024-07-22 15:59:18.642659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.946 [2024-07-22 15:59:18.701005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.205  Copying: 1024/1024 [kB] (average 1000 MBps) 00:22:16.205 00:22:16.205 15:59:19 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:22:16.205 15:59:19 -- dd/basic_rw.sh@23 -- # count=15 00:22:16.205 15:59:19 -- dd/basic_rw.sh@24 -- # count=15 00:22:16.205 15:59:19 -- dd/basic_rw.sh@25 -- # size=61440 00:22:16.205 15:59:19 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:22:16.205 15:59:19 -- dd/common.sh@98 -- # xtrace_disable 00:22:16.205 15:59:19 -- common/autotest_common.sh@10 -- # set +x 00:22:17.140 15:59:19 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:22:17.140 15:59:19 -- dd/basic_rw.sh@30 -- # gen_conf 00:22:17.140 15:59:19 -- dd/common.sh@31 -- # xtrace_disable 00:22:17.140 15:59:19 -- common/autotest_common.sh@10 -- # set +x 00:22:17.140 { 00:22:17.140 "subsystems": [ 00:22:17.140 { 00:22:17.140 "subsystem": "bdev", 00:22:17.140 "config": [ 00:22:17.140 { 00:22:17.140 "params": { 00:22:17.140 "trtype": "pcie", 00:22:17.140 "traddr": "0000:00:06.0", 00:22:17.140 "name": "Nvme0" 00:22:17.140 }, 00:22:17.140 "method": "bdev_nvme_attach_controller" 00:22:17.140 }, 00:22:17.140 { 00:22:17.140 "method": "bdev_wait_for_examine" 00:22:17.140 } 00:22:17.140 ] 00:22:17.140 } 00:22:17.140 ] 00:22:17.140 } 00:22:17.140 [2024-07-22 15:59:19.763667] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:17.140 [2024-07-22 15:59:19.763787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57659 ] 00:22:17.140 [2024-07-22 15:59:19.908619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.140 [2024-07-22 15:59:19.967187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.657  Copying: 60/60 [kB] (average 58 MBps) 00:22:17.657 00:22:17.657 15:59:20 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:22:17.657 15:59:20 -- dd/basic_rw.sh@37 -- # gen_conf 00:22:17.657 15:59:20 -- dd/common.sh@31 -- # xtrace_disable 00:22:17.657 15:59:20 -- common/autotest_common.sh@10 -- # set +x 00:22:17.657 [2024-07-22 15:59:20.349161] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:17.657 [2024-07-22 15:59:20.349257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57667 ] 00:22:17.657 { 00:22:17.657 "subsystems": [ 00:22:17.657 { 00:22:17.657 "subsystem": "bdev", 00:22:17.657 "config": [ 00:22:17.657 { 00:22:17.657 "params": { 00:22:17.657 "trtype": "pcie", 00:22:17.657 "traddr": "0000:00:06.0", 00:22:17.657 "name": "Nvme0" 00:22:17.657 }, 00:22:17.657 "method": "bdev_nvme_attach_controller" 00:22:17.657 }, 00:22:17.657 { 00:22:17.657 "method": "bdev_wait_for_examine" 00:22:17.657 } 00:22:17.657 ] 00:22:17.657 } 00:22:17.657 ] 00:22:17.657 } 00:22:17.657 [2024-07-22 15:59:20.487987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.915 [2024-07-22 15:59:20.556114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.179  Copying: 60/60 [kB] (average 58 MBps) 00:22:18.179 00:22:18.179 15:59:20 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:18.179 15:59:20 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:22:18.179 15:59:20 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:22:18.179 15:59:20 -- dd/common.sh@11 -- # local nvme_ref= 00:22:18.179 15:59:20 -- dd/common.sh@12 -- # local size=61440 00:22:18.179 15:59:20 -- dd/common.sh@14 -- # local bs=1048576 00:22:18.179 15:59:20 -- dd/common.sh@15 -- # local count=1 00:22:18.179 15:59:20 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:22:18.179 15:59:20 -- dd/common.sh@18 -- # gen_conf 00:22:18.179 15:59:20 -- dd/common.sh@31 -- # xtrace_disable 00:22:18.179 15:59:20 -- common/autotest_common.sh@10 -- # set +x 00:22:18.179 [2024-07-22 15:59:20.939650] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:18.179 [2024-07-22 15:59:20.939735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57686 ] 00:22:18.179 { 00:22:18.179 "subsystems": [ 00:22:18.179 { 00:22:18.179 "subsystem": "bdev", 00:22:18.179 "config": [ 00:22:18.179 { 00:22:18.179 "params": { 00:22:18.179 "trtype": "pcie", 00:22:18.179 "traddr": "0000:00:06.0", 00:22:18.179 "name": "Nvme0" 00:22:18.179 }, 00:22:18.179 "method": "bdev_nvme_attach_controller" 00:22:18.179 }, 00:22:18.179 { 00:22:18.179 "method": "bdev_wait_for_examine" 00:22:18.179 } 00:22:18.179 ] 00:22:18.179 } 00:22:18.179 ] 00:22:18.179 } 00:22:18.439 [2024-07-22 15:59:21.076449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.439 [2024-07-22 15:59:21.142993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.698  Copying: 1024/1024 [kB] (average 1000 MBps) 00:22:18.698 00:22:18.698 15:59:21 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:22:18.698 15:59:21 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:22:18.698 15:59:21 -- dd/basic_rw.sh@23 -- # count=7 00:22:18.698 15:59:21 -- dd/basic_rw.sh@24 -- # count=7 00:22:18.698 15:59:21 -- dd/basic_rw.sh@25 -- # size=57344 00:22:18.698 15:59:21 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:22:18.698 15:59:21 -- dd/common.sh@98 -- # xtrace_disable 00:22:18.698 15:59:21 -- common/autotest_common.sh@10 -- # set +x 00:22:19.265 15:59:22 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:22:19.265 15:59:22 -- dd/basic_rw.sh@30 -- # gen_conf 00:22:19.265 15:59:22 -- dd/common.sh@31 -- # xtrace_disable 00:22:19.265 15:59:22 -- common/autotest_common.sh@10 -- # set +x 00:22:19.523 [2024-07-22 15:59:22.158825] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:19.524 [2024-07-22 15:59:22.158914] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57704 ] 00:22:19.524 { 00:22:19.524 "subsystems": [ 00:22:19.524 { 00:22:19.524 "subsystem": "bdev", 00:22:19.524 "config": [ 00:22:19.524 { 00:22:19.524 "params": { 00:22:19.524 "trtype": "pcie", 00:22:19.524 "traddr": "0000:00:06.0", 00:22:19.524 "name": "Nvme0" 00:22:19.524 }, 00:22:19.524 "method": "bdev_nvme_attach_controller" 00:22:19.524 }, 00:22:19.524 { 00:22:19.524 "method": "bdev_wait_for_examine" 00:22:19.524 } 00:22:19.524 ] 00:22:19.524 } 00:22:19.524 ] 00:22:19.524 } 00:22:19.524 [2024-07-22 15:59:22.293321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.524 [2024-07-22 15:59:22.361820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.041  Copying: 56/56 [kB] (average 27 MBps) 00:22:20.041 00:22:20.041 15:59:22 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:22:20.041 15:59:22 -- dd/basic_rw.sh@37 -- # gen_conf 00:22:20.041 15:59:22 -- dd/common.sh@31 -- # xtrace_disable 00:22:20.041 15:59:22 -- common/autotest_common.sh@10 -- # set +x 00:22:20.041 [2024-07-22 15:59:22.748589] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:20.041 [2024-07-22 15:59:22.748692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57717 ] 00:22:20.041 { 00:22:20.041 "subsystems": [ 00:22:20.041 { 00:22:20.041 "subsystem": "bdev", 00:22:20.041 "config": [ 00:22:20.041 { 00:22:20.041 "params": { 00:22:20.041 "trtype": "pcie", 00:22:20.041 "traddr": "0000:00:06.0", 00:22:20.041 "name": "Nvme0" 00:22:20.041 }, 00:22:20.041 "method": "bdev_nvme_attach_controller" 00:22:20.041 }, 00:22:20.041 { 00:22:20.041 "method": "bdev_wait_for_examine" 00:22:20.041 } 00:22:20.041 ] 00:22:20.041 } 00:22:20.041 ] 00:22:20.041 } 00:22:20.041 [2024-07-22 15:59:22.885983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.307 [2024-07-22 15:59:22.953966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.573  Copying: 56/56 [kB] (average 54 MBps) 00:22:20.573 00:22:20.573 15:59:23 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:20.573 15:59:23 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:22:20.573 15:59:23 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:22:20.573 15:59:23 -- dd/common.sh@11 -- # local nvme_ref= 00:22:20.573 15:59:23 -- dd/common.sh@12 -- # local size=57344 00:22:20.573 15:59:23 -- dd/common.sh@14 -- # local bs=1048576 00:22:20.573 15:59:23 -- dd/common.sh@15 -- # local count=1 00:22:20.573 15:59:23 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:22:20.573 15:59:23 -- dd/common.sh@18 -- # gen_conf 00:22:20.573 15:59:23 -- dd/common.sh@31 -- # xtrace_disable 00:22:20.573 15:59:23 -- common/autotest_common.sh@10 -- # set +x 00:22:20.573 { 00:22:20.573 "subsystems": [ 00:22:20.573 { 00:22:20.573 
"subsystem": "bdev", 00:22:20.573 "config": [ 00:22:20.573 { 00:22:20.573 "params": { 00:22:20.573 "trtype": "pcie", 00:22:20.573 "traddr": "0000:00:06.0", 00:22:20.573 "name": "Nvme0" 00:22:20.573 }, 00:22:20.573 "method": "bdev_nvme_attach_controller" 00:22:20.573 }, 00:22:20.573 { 00:22:20.573 "method": "bdev_wait_for_examine" 00:22:20.573 } 00:22:20.573 ] 00:22:20.573 } 00:22:20.573 ] 00:22:20.573 } 00:22:20.573 [2024-07-22 15:59:23.348030] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:20.573 [2024-07-22 15:59:23.348601] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57730 ] 00:22:20.831 [2024-07-22 15:59:23.485545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.831 [2024-07-22 15:59:23.555915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.089  Copying: 1024/1024 [kB] (average 1000 MBps) 00:22:21.089 00:22:21.089 15:59:23 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:22:21.089 15:59:23 -- dd/basic_rw.sh@23 -- # count=7 00:22:21.089 15:59:23 -- dd/basic_rw.sh@24 -- # count=7 00:22:21.089 15:59:23 -- dd/basic_rw.sh@25 -- # size=57344 00:22:21.089 15:59:23 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:22:21.089 15:59:23 -- dd/common.sh@98 -- # xtrace_disable 00:22:21.089 15:59:23 -- common/autotest_common.sh@10 -- # set +x 00:22:21.658 15:59:24 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:22:21.658 15:59:24 -- dd/basic_rw.sh@30 -- # gen_conf 00:22:21.658 15:59:24 -- dd/common.sh@31 -- # xtrace_disable 00:22:21.658 15:59:24 -- common/autotest_common.sh@10 -- # set +x 00:22:21.917 [2024-07-22 15:59:24.526911] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:21.917 [2024-07-22 15:59:24.527052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57748 ] 00:22:21.917 { 00:22:21.917 "subsystems": [ 00:22:21.917 { 00:22:21.917 "subsystem": "bdev", 00:22:21.917 "config": [ 00:22:21.917 { 00:22:21.917 "params": { 00:22:21.917 "trtype": "pcie", 00:22:21.917 "traddr": "0000:00:06.0", 00:22:21.917 "name": "Nvme0" 00:22:21.917 }, 00:22:21.917 "method": "bdev_nvme_attach_controller" 00:22:21.917 }, 00:22:21.917 { 00:22:21.917 "method": "bdev_wait_for_examine" 00:22:21.917 } 00:22:21.917 ] 00:22:21.917 } 00:22:21.917 ] 00:22:21.917 } 00:22:21.917 [2024-07-22 15:59:24.665142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.917 [2024-07-22 15:59:24.734131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.460  Copying: 56/56 [kB] (average 54 MBps) 00:22:22.460 00:22:22.460 15:59:25 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:22:22.460 15:59:25 -- dd/basic_rw.sh@37 -- # gen_conf 00:22:22.460 15:59:25 -- dd/common.sh@31 -- # xtrace_disable 00:22:22.460 15:59:25 -- common/autotest_common.sh@10 -- # set +x 00:22:22.460 [2024-07-22 15:59:25.120648] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:22.460 [2024-07-22 15:59:25.120742] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57766 ] 00:22:22.460 { 00:22:22.460 "subsystems": [ 00:22:22.460 { 00:22:22.460 "subsystem": "bdev", 00:22:22.460 "config": [ 00:22:22.460 { 00:22:22.460 "params": { 00:22:22.460 "trtype": "pcie", 00:22:22.460 "traddr": "0000:00:06.0", 00:22:22.460 "name": "Nvme0" 00:22:22.460 }, 00:22:22.460 "method": "bdev_nvme_attach_controller" 00:22:22.460 }, 00:22:22.460 { 00:22:22.460 "method": "bdev_wait_for_examine" 00:22:22.460 } 00:22:22.460 ] 00:22:22.460 } 00:22:22.460 ] 00:22:22.460 } 00:22:22.460 [2024-07-22 15:59:25.260255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.719 [2024-07-22 15:59:25.329714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.978  Copying: 56/56 [kB] (average 54 MBps) 00:22:22.978 00:22:22.978 15:59:25 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:22.978 15:59:25 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:22:22.978 15:59:25 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:22:22.978 15:59:25 -- dd/common.sh@11 -- # local nvme_ref= 00:22:22.978 15:59:25 -- dd/common.sh@12 -- # local size=57344 00:22:22.978 15:59:25 -- dd/common.sh@14 -- # local bs=1048576 00:22:22.978 15:59:25 -- dd/common.sh@15 -- # local count=1 00:22:22.978 15:59:25 -- dd/common.sh@18 -- # gen_conf 00:22:22.978 15:59:25 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:22:22.978 15:59:25 -- dd/common.sh@31 -- # xtrace_disable 00:22:22.978 15:59:25 -- common/autotest_common.sh@10 -- # set +x 00:22:22.978 { 00:22:22.978 "subsystems": [ 00:22:22.978 { 00:22:22.978 
"subsystem": "bdev", 00:22:22.978 "config": [ 00:22:22.978 { 00:22:22.978 "params": { 00:22:22.978 "trtype": "pcie", 00:22:22.978 "traddr": "0000:00:06.0", 00:22:22.978 "name": "Nvme0" 00:22:22.978 }, 00:22:22.978 "method": "bdev_nvme_attach_controller" 00:22:22.978 }, 00:22:22.978 { 00:22:22.978 "method": "bdev_wait_for_examine" 00:22:22.978 } 00:22:22.978 ] 00:22:22.978 } 00:22:22.978 ] 00:22:22.978 } 00:22:22.978 [2024-07-22 15:59:25.735270] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:22.978 [2024-07-22 15:59:25.735417] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57774 ] 00:22:23.236 [2024-07-22 15:59:25.880638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.236 [2024-07-22 15:59:25.980350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.495  Copying: 1024/1024 [kB] (average 1000 MBps) 00:22:23.495 00:22:23.495 15:59:26 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:22:23.495 15:59:26 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:22:23.495 15:59:26 -- dd/basic_rw.sh@23 -- # count=3 00:22:23.495 15:59:26 -- dd/basic_rw.sh@24 -- # count=3 00:22:23.495 15:59:26 -- dd/basic_rw.sh@25 -- # size=49152 00:22:23.495 15:59:26 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:22:23.495 15:59:26 -- dd/common.sh@98 -- # xtrace_disable 00:22:23.495 15:59:26 -- common/autotest_common.sh@10 -- # set +x 00:22:24.061 15:59:26 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:22:24.061 15:59:26 -- dd/basic_rw.sh@30 -- # gen_conf 00:22:24.061 15:59:26 -- dd/common.sh@31 -- # xtrace_disable 00:22:24.061 15:59:26 -- common/autotest_common.sh@10 -- # set +x 00:22:24.061 [2024-07-22 15:59:26.894320] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:24.061 [2024-07-22 15:59:26.894423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57798 ] 00:22:24.061 { 00:22:24.061 "subsystems": [ 00:22:24.061 { 00:22:24.061 "subsystem": "bdev", 00:22:24.061 "config": [ 00:22:24.061 { 00:22:24.061 "params": { 00:22:24.061 "trtype": "pcie", 00:22:24.061 "traddr": "0000:00:06.0", 00:22:24.061 "name": "Nvme0" 00:22:24.061 }, 00:22:24.061 "method": "bdev_nvme_attach_controller" 00:22:24.061 }, 00:22:24.061 { 00:22:24.061 "method": "bdev_wait_for_examine" 00:22:24.061 } 00:22:24.061 ] 00:22:24.061 } 00:22:24.061 ] 00:22:24.061 } 00:22:24.319 [2024-07-22 15:59:27.025937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.319 [2024-07-22 15:59:27.083530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.834  Copying: 48/48 [kB] (average 46 MBps) 00:22:24.834 00:22:24.834 15:59:27 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:22:24.834 15:59:27 -- dd/basic_rw.sh@37 -- # gen_conf 00:22:24.834 15:59:27 -- dd/common.sh@31 -- # xtrace_disable 00:22:24.834 15:59:27 -- common/autotest_common.sh@10 -- # set +x 00:22:24.834 [2024-07-22 15:59:27.509045] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:24.834 [2024-07-22 15:59:27.509863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:{ 00:22:24.834 "subsystems": [ 00:22:24.834 { 00:22:24.834 "subsystem": "bdev", 00:22:24.834 "config": [ 00:22:24.834 { 00:22:24.834 "params": { 00:22:24.834 "trtype": "pcie", 00:22:24.834 "traddr": "0000:00:06.0", 00:22:24.834 "name": "Nvme0" 00:22:24.834 }, 00:22:24.834 "method": "bdev_nvme_attach_controller" 00:22:24.834 }, 00:22:24.834 { 00:22:24.834 "method": "bdev_wait_for_examine" 00:22:24.834 } 00:22:24.834 ] 00:22:24.834 } 00:22:24.834 ] 00:22:24.834 } 00:22:24.834 6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57810 ] 00:22:24.834 [2024-07-22 15:59:27.649754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.092 [2024-07-22 15:59:27.733422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.349  Copying: 48/48 [kB] (average 46 MBps) 00:22:25.349 00:22:25.349 15:59:28 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:25.349 15:59:28 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:22:25.350 15:59:28 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:22:25.350 15:59:28 -- dd/common.sh@11 -- # local nvme_ref= 00:22:25.350 15:59:28 -- dd/common.sh@12 -- # local size=49152 00:22:25.350 15:59:28 -- dd/common.sh@14 -- # local bs=1048576 00:22:25.350 15:59:28 -- dd/common.sh@15 -- # local count=1 00:22:25.350 15:59:28 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:22:25.350 15:59:28 -- dd/common.sh@18 -- # gen_conf 00:22:25.350 15:59:28 -- dd/common.sh@31 -- # xtrace_disable 00:22:25.350 15:59:28 -- common/autotest_common.sh@10 -- # set +x 00:22:25.350 [2024-07-22 15:59:28.106209] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 23.11.0 initialization... 00:22:25.350 [2024-07-22 15:59:28.106307] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57824 ] 00:22:25.350 { 00:22:25.350 "subsystems": [ 00:22:25.350 { 00:22:25.350 "subsystem": "bdev", 00:22:25.350 "config": [ 00:22:25.350 { 00:22:25.350 "params": { 00:22:25.350 "trtype": "pcie", 00:22:25.350 "traddr": "0000:00:06.0", 00:22:25.350 "name": "Nvme0" 00:22:25.350 }, 00:22:25.350 "method": "bdev_nvme_attach_controller" 00:22:25.350 }, 00:22:25.350 { 00:22:25.350 "method": "bdev_wait_for_examine" 00:22:25.350 } 00:22:25.350 ] 00:22:25.350 } 00:22:25.350 ] 00:22:25.350 } 00:22:25.609 [2024-07-22 15:59:28.237172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.609 [2024-07-22 15:59:28.318744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.867  Copying: 1024/1024 [kB] (average 1000 MBps) 00:22:25.867 00:22:25.867 15:59:28 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:22:25.867 15:59:28 -- dd/basic_rw.sh@23 -- # count=3 00:22:25.867 15:59:28 -- dd/basic_rw.sh@24 -- # count=3 00:22:25.867 15:59:28 -- dd/basic_rw.sh@25 -- # size=49152 00:22:25.867 15:59:28 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:22:25.867 15:59:28 -- dd/common.sh@98 -- # xtrace_disable 00:22:25.867 15:59:28 -- common/autotest_common.sh@10 -- # set +x 00:22:26.443 15:59:29 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:22:26.443 15:59:29 -- dd/basic_rw.sh@30 -- # gen_conf 00:22:26.443 15:59:29 -- dd/common.sh@31 -- # xtrace_disable 00:22:26.443 15:59:29 -- common/autotest_common.sh@10 -- # set +x 00:22:26.443 [2024-07-22 15:59:29.235300] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:26.443 [2024-07-22 15:59:29.235390] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57842 ] 00:22:26.443 { 00:22:26.443 "subsystems": [ 00:22:26.443 { 00:22:26.443 "subsystem": "bdev", 00:22:26.443 "config": [ 00:22:26.443 { 00:22:26.443 "params": { 00:22:26.443 "trtype": "pcie", 00:22:26.443 "traddr": "0000:00:06.0", 00:22:26.443 "name": "Nvme0" 00:22:26.443 }, 00:22:26.443 "method": "bdev_nvme_attach_controller" 00:22:26.443 }, 00:22:26.443 { 00:22:26.443 "method": "bdev_wait_for_examine" 00:22:26.443 } 00:22:26.443 ] 00:22:26.443 } 00:22:26.443 ] 00:22:26.443 } 00:22:26.701 [2024-07-22 15:59:29.367122] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.701 [2024-07-22 15:59:29.435627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.959  Copying: 48/48 [kB] (average 46 MBps) 00:22:26.959 00:22:26.959 15:59:29 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:22:26.959 15:59:29 -- dd/basic_rw.sh@37 -- # gen_conf 00:22:26.959 15:59:29 -- dd/common.sh@31 -- # xtrace_disable 00:22:26.959 15:59:29 -- common/autotest_common.sh@10 -- # set +x 00:22:26.959 [2024-07-22 15:59:29.808928] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:26.959 [2024-07-22 15:59:29.809019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57860 ] 00:22:26.959 { 00:22:26.959 "subsystems": [ 00:22:26.959 { 00:22:26.959 "subsystem": "bdev", 00:22:26.959 "config": [ 00:22:26.959 { 00:22:26.959 "params": { 00:22:26.959 "trtype": "pcie", 00:22:26.959 "traddr": "0000:00:06.0", 00:22:26.959 "name": "Nvme0" 00:22:26.959 }, 00:22:26.959 "method": "bdev_nvme_attach_controller" 00:22:26.959 }, 00:22:26.959 { 00:22:26.959 "method": "bdev_wait_for_examine" 00:22:26.959 } 00:22:26.959 ] 00:22:26.959 } 00:22:26.959 ] 00:22:26.959 } 00:22:27.217 [2024-07-22 15:59:29.939291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.217 [2024-07-22 15:59:30.009797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.733  Copying: 48/48 [kB] (average 46 MBps) 00:22:27.733 00:22:27.733 15:59:30 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:27.733 15:59:30 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:22:27.733 15:59:30 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:22:27.733 15:59:30 -- dd/common.sh@11 -- # local nvme_ref= 00:22:27.733 15:59:30 -- dd/common.sh@12 -- # local size=49152 00:22:27.733 15:59:30 -- dd/common.sh@14 -- # local bs=1048576 00:22:27.733 15:59:30 -- dd/common.sh@15 -- # local count=1 00:22:27.733 15:59:30 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:22:27.733 15:59:30 -- dd/common.sh@18 -- # gen_conf 00:22:27.733 15:59:30 -- dd/common.sh@31 -- # xtrace_disable 00:22:27.733 15:59:30 -- common/autotest_common.sh@10 -- # set +x 00:22:27.733 [2024-07-22 15:59:30.417124] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 23.11.0 initialization... 00:22:27.733 [2024-07-22 15:59:30.417218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57872 ] 00:22:27.733 { 00:22:27.733 "subsystems": [ 00:22:27.733 { 00:22:27.733 "subsystem": "bdev", 00:22:27.733 "config": [ 00:22:27.733 { 00:22:27.733 "params": { 00:22:27.733 "trtype": "pcie", 00:22:27.733 "traddr": "0000:00:06.0", 00:22:27.733 "name": "Nvme0" 00:22:27.733 }, 00:22:27.733 "method": "bdev_nvme_attach_controller" 00:22:27.733 }, 00:22:27.733 { 00:22:27.733 "method": "bdev_wait_for_examine" 00:22:27.733 } 00:22:27.733 ] 00:22:27.733 } 00:22:27.733 ] 00:22:27.733 } 00:22:27.733 [2024-07-22 15:59:30.550735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.991 [2024-07-22 15:59:30.608642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.275  Copying: 1024/1024 [kB] (average 1000 MBps) 00:22:28.275 00:22:28.275 00:22:28.275 real 0m14.367s 00:22:28.275 user 0m11.035s 00:22:28.275 sys 0m2.234s 00:22:28.275 15:59:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:28.275 15:59:30 -- common/autotest_common.sh@10 -- # set +x 00:22:28.275 ************************************ 00:22:28.275 END TEST dd_rw 00:22:28.275 ************************************ 00:22:28.275 15:59:30 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:22:28.275 15:59:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:28.275 15:59:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:28.275 15:59:30 -- common/autotest_common.sh@10 -- # set +x 00:22:28.275 ************************************ 00:22:28.275 START TEST dd_rw_offset 00:22:28.275 ************************************ 00:22:28.275 15:59:30 -- common/autotest_common.sh@1104 -- # basic_offset 00:22:28.275 15:59:30 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:22:28.275 15:59:30 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:22:28.275 15:59:30 -- dd/common.sh@98 -- # xtrace_disable 00:22:28.275 15:59:30 -- common/autotest_common.sh@10 -- # set +x 00:22:28.275 15:59:31 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:22:28.276 15:59:31 -- dd/basic_rw.sh@56 -- # 
data=4sozbtc7huxjlshwcprhdvi0xzdq8d79jdmuieygdj33sqoq4mxccygm15xhaysn0nl96yo9mdhgpnsr99sv54isuwb5j5zpv69jb72sunmo5v5sjf8o7hfb1bvhh6ak2576u2knpuggz6tdzj6jvgrxvzl5ssditdnbku9ptot5dqyrft2nvyp096wbaaeccuor4a5s3wmps50u0vimhcn5j5upg4r8fbpzb7lzrcj058yydtz7ty9rqxljgaom4afgzbtjmret7wl8aj4pn2ggmo6cfmyqo04u34wf3azpgg44kw1ua78b3vdft9zl3lhzfb5iy2anf7mn11un5n4aq4b11p0gmv6oqzn810rutyew9s5pg1n066mw38lg4fnipxc8ro9p053gywngqg1xphnhpqygdlivsfxujpzd0y81etuzb59f4jrmtanwjk228esa52bgmyhzq532zb7kie9mp73pkf0fnd8fhjx0h19g4430vxw2eotvy340kylvs66f2wk4878ki4tut6kvo09y96voq6regoq6z7xa4vx6ua1wribp2dmghwo7qvyzz96wfx1cbf0vem5jhytrvezddi4iyx88i7vq1dn8lp8nfi1bclum6sqey7qptki7yla9qrca2d9iui3w5vsysophko47juy3o3vshfr1eottl110ggp5mym8r4040cac2tda07iu6ba35lwncy9zhnrooab4ialf0kxfx2dtypuui7nzz0wr5g2s2wvecikbtaefsr83d1bypiuocw05kvie133rctunso2dyrh9ieykz2ea61kuda2qio2kax7fhqsm2cb8rgbt05pdbsic0xkgtrolfdd8eirkuzb9yoe7q7d9w08wgwslzqpabifjxtk1v33thjumu24tqphy0szfxs3woyi5957g250l0ylwm7zythb03pix9stibs1hae2wi3hq33x0ote9yjj74yr5vswj2udwjaia77rmg9puhuazvjmhw5qhe3ouvyf9xehfrp80rpre9fs2qbx3olkg5g2oo6jn7y2816w5wv6ych44w8v27dwd6f0v7kc7la09niwfkzefknc7f296y0ntvssriph2me6zq3yoz1dkfonwyrz9mqyvjg1g2y27mhl3kdh18nzfb06gezu31l1rqsjy5hqczxs2q8ax1hm4hgk0aybn2goqd2k1hionxz1orufevjzxbvx4h03i1mevueqd8h5ddhvi8s2lt2wxazz43yyal8u9wmacx82ejd49t58zm5oh8zzqpenft4i51pyzaizhtsyda5e2wc4oxw82ouij6fk1zgrnj5ixcpjspb7rjxi6jpggbe1haqef515hpytuglvl0rzvmei2alfqc26p9v82j5077sb3ux67n6lpmv4k8dufhyrq3z893kq0nmlmcqr1jbokpvco9xade1l7sycr77vqhnklj5ngauoskdcmbd4siusdmtst3wp5ic9itjf99bchkab1wxh00ifenekzw8t7nlv0lpzcslfm7chdzdrylvgq61kgg723vkwztqqh4ztou274fjid7yv5f7ci33r2zz8t4vdrn4bxp7v3ryt8m6jud2v36akeus3pxyxcrs369jwy4iokc9utsr89z6oxfpcndntye62r9uhhlaohiwpowikb9wkk5vpe8wokjyqtsg3wl3qixoo0ucxff90ottgr12spumj3gkg5yvx4la9kzmg81fp0dakcj24qap9iglp21uv44ocyswxth4r0rv63bbijzqap7epp5rlbxfbcv061jaktu9nehzjtm28ai3c3mqoaaotf0u93nj27bdbjxqgwr1bkon30thcj9153gi5toayn9zt7w6atslvhn4p5qokzl2vcvowkt2hyafu26oq13n7n1tf6fyv3xgvb3bo2dahjrsw1ssr5zevuqw3gbp92g26p1homzev5klwafqa2ajb7fgzys5075eqtwicub8ory3vhdr302tlvhklrlqm6m2u1yqknedck19dt64jrydoe3iljmzfz7yu7w8oveqk16khmymf8ckeho86csu999t0skx6bq6l78u1qas997znx1lsvbvd01p8ax9hao97rxqjxpm9bi96utp0t1crm14ku579ijnny0t576snelyxunv996jsz9uowh2he8rv7bdivj5awkwg3g5nd15ppy652v5cx6n2pqssvcpc9z87dsw1jzkzc5yju5dhkxjyhseft6fyb28zryzakb73kadu2kwgk1mccfvxnt2ytyj1za3crcs5iklpoxt9cb9mja61u2nicwshf49z3sm99iis73eedxvaczoydsk6uk96g8ynum7si508mdolgw76l3y0wze63ea7l47yss6zfo9qh2cuyxiyiib32sficr0ih6pnx6il0zhg2w08v8vmfs0vtkjt7icvubrs3sokhh5hv0sn8x65chpzh07xkvzj52vy5fmomgjund1z4ifaj0k47kfujr8gwwlfunmn48six8qlwdtzfu1qvi38hdy5cfrfxs3ubr0lylms0v00isx0uk2ztr1aevbkgyoaldbeo7b9540to9vr2180vaigwd3on6f659m39aw55v93mt2b697zgcczi1r8hsdagmsrmde51e3jfr7so11s68fixwqbrq7vydzvoawdftwcbg723p2hikbkknifoetmjjodx099iua43agj0obz2n1c3ueza9oc91q0p8q5tot0ewdw5rxvf420m3kj4xz5aipt4rg18vh3sgpzp9jg3dwrx06mgn1488h439u0gnn69hgs7aj3b46289vyc2uv8icggg0kxts0ax0apz9wmnk0v776q239d5eyct3lgk5w5l4d7xvvpj23cazv66rmqer0hed91ke1ixswnor9tvu2llp499f093frdnpmgqldsy2x576fk0g4z9lusdrnfif60jq6ehucm7klqm9ifuniyl6kwas2qd7x78zuup4lqw5rzc9kucn2etcvin5pfkcz6ttpbz5705l8wvb5fek7eebhbwu8bqd6vk6vpu1hlqvu5wqndr8bi0xs5zyx5d0qig44d7kxfpzcqs3ovkar3fw4724infd4m2tkrtrgzcv3fqj9pz61x05y5je555ndyboow4qcaeazm13ez5hilvck7aoc0lliwnfb3zgsq6jiax5me8whp8xv5pqzueo382ahg4a7nj5kdi4u9k39jzrtvsksodd6tdj6vbsm02x5kte66k566ne5wxmu5ujp7898b41kqbqqor6po7f0s4rrak5gr23gr1kfpu3gn2irntzvecndn69cvxnq46ee55xhb5aqjo6p4qfu7qy4pzavddt04k4u2cxac6wr9voqgx0yn1rt8mmhtq3f8gbxgyt6zn487brafyhcclu166ngrwwgiw302qrf0zox5xs059rmfx8o12ppr5dnxatkrz5sujlddja0igto7m135nzzn1xs31jn7kklh0m
otrdku6tk9kazkl74wq9bqy36gtcdqbradwvr5kc1xqswzogbo6xhuxs1go7spjqzk4l658dyeg2rpd14sjct2t2cylrq74bfyyxeano557b9ubc10k7x9grxqlnxo4my7u84ymvbtebfuvx4nm8t3qxztsg7qsvaqa3323zfdmbbu1ok5sp2983mz7ifwf5ncabt77o4b056885k5rlqvm2o1ez93k0lswngobq3ocprexn6qoboftoxgy4ztnkr7jqhtm3rggxcmwt6by68yaurkomgwdbp122kiigpggwb8rbimokkll1uqpcl5k6g514ooaab8j1iqeogtthku5am43rvr0ijv10smx7t1o1kw37p1bclfgxlkjq9aybdd9v276jzth5dfe1dprw4pbbqoxpmb8shg5kuj15y3iziow2fh1jhbj6xp7g19c4syi1fzo9rm3us7e8u1ye5q5gmdho68o8q9ssf16w0zbemga65e61o00cv8vllymm6ezv8bkzzxkgjx4jmt6vdf6quk1n3wkx6u 00:22:28.276 15:59:31 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:22:28.276 15:59:31 -- dd/basic_rw.sh@59 -- # gen_conf 00:22:28.276 15:59:31 -- dd/common.sh@31 -- # xtrace_disable 00:22:28.276 15:59:31 -- common/autotest_common.sh@10 -- # set +x 00:22:28.276 [2024-07-22 15:59:31.100673] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:28.276 [2024-07-22 15:59:31.100795] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57903 ] 00:22:28.276 { 00:22:28.276 "subsystems": [ 00:22:28.276 { 00:22:28.276 "subsystem": "bdev", 00:22:28.276 "config": [ 00:22:28.276 { 00:22:28.276 "params": { 00:22:28.276 "trtype": "pcie", 00:22:28.276 "traddr": "0000:00:06.0", 00:22:28.276 "name": "Nvme0" 00:22:28.276 }, 00:22:28.276 "method": "bdev_nvme_attach_controller" 00:22:28.276 }, 00:22:28.276 { 00:22:28.276 "method": "bdev_wait_for_examine" 00:22:28.276 } 00:22:28.276 ] 00:22:28.276 } 00:22:28.276 ] 00:22:28.276 } 00:22:28.533 [2024-07-22 15:59:31.242445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.533 [2024-07-22 15:59:31.301481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.791  Copying: 4096/4096 [B] (average 4000 kBps) 00:22:28.791 00:22:28.791 15:59:31 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:22:28.791 15:59:31 -- dd/basic_rw.sh@65 -- # gen_conf 00:22:28.791 15:59:31 -- dd/common.sh@31 -- # xtrace_disable 00:22:28.791 15:59:31 -- common/autotest_common.sh@10 -- # set +x 00:22:29.049 [2024-07-22 15:59:31.670678] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:29.049 [2024-07-22 15:59:31.670767] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57915 ] 00:22:29.049 { 00:22:29.049 "subsystems": [ 00:22:29.049 { 00:22:29.049 "subsystem": "bdev", 00:22:29.049 "config": [ 00:22:29.049 { 00:22:29.049 "params": { 00:22:29.049 "trtype": "pcie", 00:22:29.049 "traddr": "0000:00:06.0", 00:22:29.049 "name": "Nvme0" 00:22:29.049 }, 00:22:29.049 "method": "bdev_nvme_attach_controller" 00:22:29.049 }, 00:22:29.049 { 00:22:29.049 "method": "bdev_wait_for_examine" 00:22:29.049 } 00:22:29.049 ] 00:22:29.049 } 00:22:29.049 ] 00:22:29.049 } 00:22:29.049 [2024-07-22 15:59:31.799209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.049 [2024-07-22 15:59:31.874302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.565  Copying: 4096/4096 [B] (average 4000 kBps) 00:22:29.565 00:22:29.565 15:59:32 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:22:29.566 15:59:32 -- dd/basic_rw.sh@72 -- # [[ 4sozbtc7huxjlshwcprhdvi0xzdq8d79jdmuieygdj33sqoq4mxccygm15xhaysn0nl96yo9mdhgpnsr99sv54isuwb5j5zpv69jb72sunmo5v5sjf8o7hfb1bvhh6ak2576u2knpuggz6tdzj6jvgrxvzl5ssditdnbku9ptot5dqyrft2nvyp096wbaaeccuor4a5s3wmps50u0vimhcn5j5upg4r8fbpzb7lzrcj058yydtz7ty9rqxljgaom4afgzbtjmret7wl8aj4pn2ggmo6cfmyqo04u34wf3azpgg44kw1ua78b3vdft9zl3lhzfb5iy2anf7mn11un5n4aq4b11p0gmv6oqzn810rutyew9s5pg1n066mw38lg4fnipxc8ro9p053gywngqg1xphnhpqygdlivsfxujpzd0y81etuzb59f4jrmtanwjk228esa52bgmyhzq532zb7kie9mp73pkf0fnd8fhjx0h19g4430vxw2eotvy340kylvs66f2wk4878ki4tut6kvo09y96voq6regoq6z7xa4vx6ua1wribp2dmghwo7qvyzz96wfx1cbf0vem5jhytrvezddi4iyx88i7vq1dn8lp8nfi1bclum6sqey7qptki7yla9qrca2d9iui3w5vsysophko47juy3o3vshfr1eottl110ggp5mym8r4040cac2tda07iu6ba35lwncy9zhnrooab4ialf0kxfx2dtypuui7nzz0wr5g2s2wvecikbtaefsr83d1bypiuocw05kvie133rctunso2dyrh9ieykz2ea61kuda2qio2kax7fhqsm2cb8rgbt05pdbsic0xkgtrolfdd8eirkuzb9yoe7q7d9w08wgwslzqpabifjxtk1v33thjumu24tqphy0szfxs3woyi5957g250l0ylwm7zythb03pix9stibs1hae2wi3hq33x0ote9yjj74yr5vswj2udwjaia77rmg9puhuazvjmhw5qhe3ouvyf9xehfrp80rpre9fs2qbx3olkg5g2oo6jn7y2816w5wv6ych44w8v27dwd6f0v7kc7la09niwfkzefknc7f296y0ntvssriph2me6zq3yoz1dkfonwyrz9mqyvjg1g2y27mhl3kdh18nzfb06gezu31l1rqsjy5hqczxs2q8ax1hm4hgk0aybn2goqd2k1hionxz1orufevjzxbvx4h03i1mevueqd8h5ddhvi8s2lt2wxazz43yyal8u9wmacx82ejd49t58zm5oh8zzqpenft4i51pyzaizhtsyda5e2wc4oxw82ouij6fk1zgrnj5ixcpjspb7rjxi6jpggbe1haqef515hpytuglvl0rzvmei2alfqc26p9v82j5077sb3ux67n6lpmv4k8dufhyrq3z893kq0nmlmcqr1jbokpvco9xade1l7sycr77vqhnklj5ngauoskdcmbd4siusdmtst3wp5ic9itjf99bchkab1wxh00ifenekzw8t7nlv0lpzcslfm7chdzdrylvgq61kgg723vkwztqqh4ztou274fjid7yv5f7ci33r2zz8t4vdrn4bxp7v3ryt8m6jud2v36akeus3pxyxcrs369jwy4iokc9utsr89z6oxfpcndntye62r9uhhlaohiwpowikb9wkk5vpe8wokjyqtsg3wl3qixoo0ucxff90ottgr12spumj3gkg5yvx4la9kzmg81fp0dakcj24qap9iglp21uv44ocyswxth4r0rv63bbijzqap7epp5rlbxfbcv061jaktu9nehzjtm28ai3c3mqoaaotf0u93nj27bdbjxqgwr1bkon30thcj9153gi5toayn9zt7w6atslvhn4p5qokzl2vcvowkt2hyafu26oq13n7n1tf6fyv3xgvb3bo2dahjrsw1ssr5zevuqw3gbp92g26p1homzev5klwafqa2ajb7fgzys5075eqtwicub8ory3vhdr302tlvhklrlqm6m2u1yqknedck19dt64jrydoe3iljmzfz7yu7w8oveqk16khmymf8ckeho86csu999t0skx6bq6l78u1qas997znx1lsvbvd01p8ax9hao97rxqjxpm9bi96utp0t1crm14ku579ijnny0t576snelyxunv996jsz9uowh2he8rv7bdivj5awkwg3g5nd15ppy652v5cx6n2pqssvcpc9z87dsw1jzkzc5yju5dhkxjyhseft6fyb28zryzakb73kadu2kwgk1mccfvxnt2ytyj1za3crcs5iklpoxt9cb9mja61u2nicwshf49z3sm99iis73eedxvaczoydsk6uk96g8ynum7
si508mdolgw76l3y0wze63ea7l47yss6zfo9qh2cuyxiyiib32sficr0ih6pnx6il0zhg2w08v8vmfs0vtkjt7icvubrs3sokhh5hv0sn8x65chpzh07xkvzj52vy5fmomgjund1z4ifaj0k47kfujr8gwwlfunmn48six8qlwdtzfu1qvi38hdy5cfrfxs3ubr0lylms0v00isx0uk2ztr1aevbkgyoaldbeo7b9540to9vr2180vaigwd3on6f659m39aw55v93mt2b697zgcczi1r8hsdagmsrmde51e3jfr7so11s68fixwqbrq7vydzvoawdftwcbg723p2hikbkknifoetmjjodx099iua43agj0obz2n1c3ueza9oc91q0p8q5tot0ewdw5rxvf420m3kj4xz5aipt4rg18vh3sgpzp9jg3dwrx06mgn1488h439u0gnn69hgs7aj3b46289vyc2uv8icggg0kxts0ax0apz9wmnk0v776q239d5eyct3lgk5w5l4d7xvvpj23cazv66rmqer0hed91ke1ixswnor9tvu2llp499f093frdnpmgqldsy2x576fk0g4z9lusdrnfif60jq6ehucm7klqm9ifuniyl6kwas2qd7x78zuup4lqw5rzc9kucn2etcvin5pfkcz6ttpbz5705l8wvb5fek7eebhbwu8bqd6vk6vpu1hlqvu5wqndr8bi0xs5zyx5d0qig44d7kxfpzcqs3ovkar3fw4724infd4m2tkrtrgzcv3fqj9pz61x05y5je555ndyboow4qcaeazm13ez5hilvck7aoc0lliwnfb3zgsq6jiax5me8whp8xv5pqzueo382ahg4a7nj5kdi4u9k39jzrtvsksodd6tdj6vbsm02x5kte66k566ne5wxmu5ujp7898b41kqbqqor6po7f0s4rrak5gr23gr1kfpu3gn2irntzvecndn69cvxnq46ee55xhb5aqjo6p4qfu7qy4pzavddt04k4u2cxac6wr9voqgx0yn1rt8mmhtq3f8gbxgyt6zn487brafyhcclu166ngrwwgiw302qrf0zox5xs059rmfx8o12ppr5dnxatkrz5sujlddja0igto7m135nzzn1xs31jn7kklh0motrdku6tk9kazkl74wq9bqy36gtcdqbradwvr5kc1xqswzogbo6xhuxs1go7spjqzk4l658dyeg2rpd14sjct2t2cylrq74bfyyxeano557b9ubc10k7x9grxqlnxo4my7u84ymvbtebfuvx4nm8t3qxztsg7qsvaqa3323zfdmbbu1ok5sp2983mz7ifwf5ncabt77o4b056885k5rlqvm2o1ez93k0lswngobq3ocprexn6qoboftoxgy4ztnkr7jqhtm3rggxcmwt6by68yaurkomgwdbp122kiigpggwb8rbimokkll1uqpcl5k6g514ooaab8j1iqeogtthku5am43rvr0ijv10smx7t1o1kw37p1bclfgxlkjq9aybdd9v276jzth5dfe1dprw4pbbqoxpmb8shg5kuj15y3iziow2fh1jhbj6xp7g19c4syi1fzo9rm3us7e8u1ye5q5gmdho68o8q9ssf16w0zbemga65e61o00cv8vllymm6ezv8bkzzxkgjx4jmt6vdf6quk1n3wkx6u == \4\s\o\z\b\t\c\7\h\u\x\j\l\s\h\w\c\p\r\h\d\v\i\0\x\z\d\q\8\d\7\9\j\d\m\u\i\e\y\g\d\j\3\3\s\q\o\q\4\m\x\c\c\y\g\m\1\5\x\h\a\y\s\n\0\n\l\9\6\y\o\9\m\d\h\g\p\n\s\r\9\9\s\v\5\4\i\s\u\w\b\5\j\5\z\p\v\6\9\j\b\7\2\s\u\n\m\o\5\v\5\s\j\f\8\o\7\h\f\b\1\b\v\h\h\6\a\k\2\5\7\6\u\2\k\n\p\u\g\g\z\6\t\d\z\j\6\j\v\g\r\x\v\z\l\5\s\s\d\i\t\d\n\b\k\u\9\p\t\o\t\5\d\q\y\r\f\t\2\n\v\y\p\0\9\6\w\b\a\a\e\c\c\u\o\r\4\a\5\s\3\w\m\p\s\5\0\u\0\v\i\m\h\c\n\5\j\5\u\p\g\4\r\8\f\b\p\z\b\7\l\z\r\c\j\0\5\8\y\y\d\t\z\7\t\y\9\r\q\x\l\j\g\a\o\m\4\a\f\g\z\b\t\j\m\r\e\t\7\w\l\8\a\j\4\p\n\2\g\g\m\o\6\c\f\m\y\q\o\0\4\u\3\4\w\f\3\a\z\p\g\g\4\4\k\w\1\u\a\7\8\b\3\v\d\f\t\9\z\l\3\l\h\z\f\b\5\i\y\2\a\n\f\7\m\n\1\1\u\n\5\n\4\a\q\4\b\1\1\p\0\g\m\v\6\o\q\z\n\8\1\0\r\u\t\y\e\w\9\s\5\p\g\1\n\0\6\6\m\w\3\8\l\g\4\f\n\i\p\x\c\8\r\o\9\p\0\5\3\g\y\w\n\g\q\g\1\x\p\h\n\h\p\q\y\g\d\l\i\v\s\f\x\u\j\p\z\d\0\y\8\1\e\t\u\z\b\5\9\f\4\j\r\m\t\a\n\w\j\k\2\2\8\e\s\a\5\2\b\g\m\y\h\z\q\5\3\2\z\b\7\k\i\e\9\m\p\7\3\p\k\f\0\f\n\d\8\f\h\j\x\0\h\1\9\g\4\4\3\0\v\x\w\2\e\o\t\v\y\3\4\0\k\y\l\v\s\6\6\f\2\w\k\4\8\7\8\k\i\4\t\u\t\6\k\v\o\0\9\y\9\6\v\o\q\6\r\e\g\o\q\6\z\7\x\a\4\v\x\6\u\a\1\w\r\i\b\p\2\d\m\g\h\w\o\7\q\v\y\z\z\9\6\w\f\x\1\c\b\f\0\v\e\m\5\j\h\y\t\r\v\e\z\d\d\i\4\i\y\x\8\8\i\7\v\q\1\d\n\8\l\p\8\n\f\i\1\b\c\l\u\m\6\s\q\e\y\7\q\p\t\k\i\7\y\l\a\9\q\r\c\a\2\d\9\i\u\i\3\w\5\v\s\y\s\o\p\h\k\o\4\7\j\u\y\3\o\3\v\s\h\f\r\1\e\o\t\t\l\1\1\0\g\g\p\5\m\y\m\8\r\4\0\4\0\c\a\c\2\t\d\a\0\7\i\u\6\b\a\3\5\l\w\n\c\y\9\z\h\n\r\o\o\a\b\4\i\a\l\f\0\k\x\f\x\2\d\t\y\p\u\u\i\7\n\z\z\0\w\r\5\g\2\s\2\w\v\e\c\i\k\b\t\a\e\f\s\r\8\3\d\1\b\y\p\i\u\o\c\w\0\5\k\v\i\e\1\3\3\r\c\t\u\n\s\o\2\d\y\r\h\9\i\e\y\k\z\2\e\a\6\1\k\u\d\a\2\q\i\o\2\k\a\x\7\f\h\q\s\m\2\c\b\8\r\g\b\t\0\5\p\d\b\s\i\c\0\x\k\g\t\r\o\l\f\d\d\8\e\i\r\k\u\z\b\9\y\o\e\7\q\7\d\9\w\0\8\w\g\w\s\l\z\q\p\a\b\i\f\j\x\t\k\1\v\3\3\t\h\j\u\m\u\2\4\t\q\p\h\y\
0\s\z\f\x\s\3\w\o\y\i\5\9\5\7\g\2\5\0\l\0\y\l\w\m\7\z\y\t\h\b\0\3\p\i\x\9\s\t\i\b\s\1\h\a\e\2\w\i\3\h\q\3\3\x\0\o\t\e\9\y\j\j\7\4\y\r\5\v\s\w\j\2\u\d\w\j\a\i\a\7\7\r\m\g\9\p\u\h\u\a\z\v\j\m\h\w\5\q\h\e\3\o\u\v\y\f\9\x\e\h\f\r\p\8\0\r\p\r\e\9\f\s\2\q\b\x\3\o\l\k\g\5\g\2\o\o\6\j\n\7\y\2\8\1\6\w\5\w\v\6\y\c\h\4\4\w\8\v\2\7\d\w\d\6\f\0\v\7\k\c\7\l\a\0\9\n\i\w\f\k\z\e\f\k\n\c\7\f\2\9\6\y\0\n\t\v\s\s\r\i\p\h\2\m\e\6\z\q\3\y\o\z\1\d\k\f\o\n\w\y\r\z\9\m\q\y\v\j\g\1\g\2\y\2\7\m\h\l\3\k\d\h\1\8\n\z\f\b\0\6\g\e\z\u\3\1\l\1\r\q\s\j\y\5\h\q\c\z\x\s\2\q\8\a\x\1\h\m\4\h\g\k\0\a\y\b\n\2\g\o\q\d\2\k\1\h\i\o\n\x\z\1\o\r\u\f\e\v\j\z\x\b\v\x\4\h\0\3\i\1\m\e\v\u\e\q\d\8\h\5\d\d\h\v\i\8\s\2\l\t\2\w\x\a\z\z\4\3\y\y\a\l\8\u\9\w\m\a\c\x\8\2\e\j\d\4\9\t\5\8\z\m\5\o\h\8\z\z\q\p\e\n\f\t\4\i\5\1\p\y\z\a\i\z\h\t\s\y\d\a\5\e\2\w\c\4\o\x\w\8\2\o\u\i\j\6\f\k\1\z\g\r\n\j\5\i\x\c\p\j\s\p\b\7\r\j\x\i\6\j\p\g\g\b\e\1\h\a\q\e\f\5\1\5\h\p\y\t\u\g\l\v\l\0\r\z\v\m\e\i\2\a\l\f\q\c\2\6\p\9\v\8\2\j\5\0\7\7\s\b\3\u\x\6\7\n\6\l\p\m\v\4\k\8\d\u\f\h\y\r\q\3\z\8\9\3\k\q\0\n\m\l\m\c\q\r\1\j\b\o\k\p\v\c\o\9\x\a\d\e\1\l\7\s\y\c\r\7\7\v\q\h\n\k\l\j\5\n\g\a\u\o\s\k\d\c\m\b\d\4\s\i\u\s\d\m\t\s\t\3\w\p\5\i\c\9\i\t\j\f\9\9\b\c\h\k\a\b\1\w\x\h\0\0\i\f\e\n\e\k\z\w\8\t\7\n\l\v\0\l\p\z\c\s\l\f\m\7\c\h\d\z\d\r\y\l\v\g\q\6\1\k\g\g\7\2\3\v\k\w\z\t\q\q\h\4\z\t\o\u\2\7\4\f\j\i\d\7\y\v\5\f\7\c\i\3\3\r\2\z\z\8\t\4\v\d\r\n\4\b\x\p\7\v\3\r\y\t\8\m\6\j\u\d\2\v\3\6\a\k\e\u\s\3\p\x\y\x\c\r\s\3\6\9\j\w\y\4\i\o\k\c\9\u\t\s\r\8\9\z\6\o\x\f\p\c\n\d\n\t\y\e\6\2\r\9\u\h\h\l\a\o\h\i\w\p\o\w\i\k\b\9\w\k\k\5\v\p\e\8\w\o\k\j\y\q\t\s\g\3\w\l\3\q\i\x\o\o\0\u\c\x\f\f\9\0\o\t\t\g\r\1\2\s\p\u\m\j\3\g\k\g\5\y\v\x\4\l\a\9\k\z\m\g\8\1\f\p\0\d\a\k\c\j\2\4\q\a\p\9\i\g\l\p\2\1\u\v\4\4\o\c\y\s\w\x\t\h\4\r\0\r\v\6\3\b\b\i\j\z\q\a\p\7\e\p\p\5\r\l\b\x\f\b\c\v\0\6\1\j\a\k\t\u\9\n\e\h\z\j\t\m\2\8\a\i\3\c\3\m\q\o\a\a\o\t\f\0\u\9\3\n\j\2\7\b\d\b\j\x\q\g\w\r\1\b\k\o\n\3\0\t\h\c\j\9\1\5\3\g\i\5\t\o\a\y\n\9\z\t\7\w\6\a\t\s\l\v\h\n\4\p\5\q\o\k\z\l\2\v\c\v\o\w\k\t\2\h\y\a\f\u\2\6\o\q\1\3\n\7\n\1\t\f\6\f\y\v\3\x\g\v\b\3\b\o\2\d\a\h\j\r\s\w\1\s\s\r\5\z\e\v\u\q\w\3\g\b\p\9\2\g\2\6\p\1\h\o\m\z\e\v\5\k\l\w\a\f\q\a\2\a\j\b\7\f\g\z\y\s\5\0\7\5\e\q\t\w\i\c\u\b\8\o\r\y\3\v\h\d\r\3\0\2\t\l\v\h\k\l\r\l\q\m\6\m\2\u\1\y\q\k\n\e\d\c\k\1\9\d\t\6\4\j\r\y\d\o\e\3\i\l\j\m\z\f\z\7\y\u\7\w\8\o\v\e\q\k\1\6\k\h\m\y\m\f\8\c\k\e\h\o\8\6\c\s\u\9\9\9\t\0\s\k\x\6\b\q\6\l\7\8\u\1\q\a\s\9\9\7\z\n\x\1\l\s\v\b\v\d\0\1\p\8\a\x\9\h\a\o\9\7\r\x\q\j\x\p\m\9\b\i\9\6\u\t\p\0\t\1\c\r\m\1\4\k\u\5\7\9\i\j\n\n\y\0\t\5\7\6\s\n\e\l\y\x\u\n\v\9\9\6\j\s\z\9\u\o\w\h\2\h\e\8\r\v\7\b\d\i\v\j\5\a\w\k\w\g\3\g\5\n\d\1\5\p\p\y\6\5\2\v\5\c\x\6\n\2\p\q\s\s\v\c\p\c\9\z\8\7\d\s\w\1\j\z\k\z\c\5\y\j\u\5\d\h\k\x\j\y\h\s\e\f\t\6\f\y\b\2\8\z\r\y\z\a\k\b\7\3\k\a\d\u\2\k\w\g\k\1\m\c\c\f\v\x\n\t\2\y\t\y\j\1\z\a\3\c\r\c\s\5\i\k\l\p\o\x\t\9\c\b\9\m\j\a\6\1\u\2\n\i\c\w\s\h\f\4\9\z\3\s\m\9\9\i\i\s\7\3\e\e\d\x\v\a\c\z\o\y\d\s\k\6\u\k\9\6\g\8\y\n\u\m\7\s\i\5\0\8\m\d\o\l\g\w\7\6\l\3\y\0\w\z\e\6\3\e\a\7\l\4\7\y\s\s\6\z\f\o\9\q\h\2\c\u\y\x\i\y\i\i\b\3\2\s\f\i\c\r\0\i\h\6\p\n\x\6\i\l\0\z\h\g\2\w\0\8\v\8\v\m\f\s\0\v\t\k\j\t\7\i\c\v\u\b\r\s\3\s\o\k\h\h\5\h\v\0\s\n\8\x\6\5\c\h\p\z\h\0\7\x\k\v\z\j\5\2\v\y\5\f\m\o\m\g\j\u\n\d\1\z\4\i\f\a\j\0\k\4\7\k\f\u\j\r\8\g\w\w\l\f\u\n\m\n\4\8\s\i\x\8\q\l\w\d\t\z\f\u\1\q\v\i\3\8\h\d\y\5\c\f\r\f\x\s\3\u\b\r\0\l\y\l\m\s\0\v\0\0\i\s\x\0\u\k\2\z\t\r\1\a\e\v\b\k\g\y\o\a\l\d\b\e\o\7\b\9\5\4\0\t\o\9\v\r\2\1\8\0\v\a\i\g\w\d\3\o\n\6\f\6\5\9\m\3\9\a\w\5\5\v\9\3\m\t\2\b\6\9\7\z\g\c\c\z\i\1\r\8\h\s\d\a\g\m\s\r\m\d\e\5\1\e\3\j\f\r\7\s\o\1\1\s\6\8\f
\i\x\w\q\b\r\q\7\v\y\d\z\v\o\a\w\d\f\t\w\c\b\g\7\2\3\p\2\h\i\k\b\k\k\n\i\f\o\e\t\m\j\j\o\d\x\0\9\9\i\u\a\4\3\a\g\j\0\o\b\z\2\n\1\c\3\u\e\z\a\9\o\c\9\1\q\0\p\8\q\5\t\o\t\0\e\w\d\w\5\r\x\v\f\4\2\0\m\3\k\j\4\x\z\5\a\i\p\t\4\r\g\1\8\v\h\3\s\g\p\z\p\9\j\g\3\d\w\r\x\0\6\m\g\n\1\4\8\8\h\4\3\9\u\0\g\n\n\6\9\h\g\s\7\a\j\3\b\4\6\2\8\9\v\y\c\2\u\v\8\i\c\g\g\g\0\k\x\t\s\0\a\x\0\a\p\z\9\w\m\n\k\0\v\7\7\6\q\2\3\9\d\5\e\y\c\t\3\l\g\k\5\w\5\l\4\d\7\x\v\v\p\j\2\3\c\a\z\v\6\6\r\m\q\e\r\0\h\e\d\9\1\k\e\1\i\x\s\w\n\o\r\9\t\v\u\2\l\l\p\4\9\9\f\0\9\3\f\r\d\n\p\m\g\q\l\d\s\y\2\x\5\7\6\f\k\0\g\4\z\9\l\u\s\d\r\n\f\i\f\6\0\j\q\6\e\h\u\c\m\7\k\l\q\m\9\i\f\u\n\i\y\l\6\k\w\a\s\2\q\d\7\x\7\8\z\u\u\p\4\l\q\w\5\r\z\c\9\k\u\c\n\2\e\t\c\v\i\n\5\p\f\k\c\z\6\t\t\p\b\z\5\7\0\5\l\8\w\v\b\5\f\e\k\7\e\e\b\h\b\w\u\8\b\q\d\6\v\k\6\v\p\u\1\h\l\q\v\u\5\w\q\n\d\r\8\b\i\0\x\s\5\z\y\x\5\d\0\q\i\g\4\4\d\7\k\x\f\p\z\c\q\s\3\o\v\k\a\r\3\f\w\4\7\2\4\i\n\f\d\4\m\2\t\k\r\t\r\g\z\c\v\3\f\q\j\9\p\z\6\1\x\0\5\y\5\j\e\5\5\5\n\d\y\b\o\o\w\4\q\c\a\e\a\z\m\1\3\e\z\5\h\i\l\v\c\k\7\a\o\c\0\l\l\i\w\n\f\b\3\z\g\s\q\6\j\i\a\x\5\m\e\8\w\h\p\8\x\v\5\p\q\z\u\e\o\3\8\2\a\h\g\4\a\7\n\j\5\k\d\i\4\u\9\k\3\9\j\z\r\t\v\s\k\s\o\d\d\6\t\d\j\6\v\b\s\m\0\2\x\5\k\t\e\6\6\k\5\6\6\n\e\5\w\x\m\u\5\u\j\p\7\8\9\8\b\4\1\k\q\b\q\q\o\r\6\p\o\7\f\0\s\4\r\r\a\k\5\g\r\2\3\g\r\1\k\f\p\u\3\g\n\2\i\r\n\t\z\v\e\c\n\d\n\6\9\c\v\x\n\q\4\6\e\e\5\5\x\h\b\5\a\q\j\o\6\p\4\q\f\u\7\q\y\4\p\z\a\v\d\d\t\0\4\k\4\u\2\c\x\a\c\6\w\r\9\v\o\q\g\x\0\y\n\1\r\t\8\m\m\h\t\q\3\f\8\g\b\x\g\y\t\6\z\n\4\8\7\b\r\a\f\y\h\c\c\l\u\1\6\6\n\g\r\w\w\g\i\w\3\0\2\q\r\f\0\z\o\x\5\x\s\0\5\9\r\m\f\x\8\o\1\2\p\p\r\5\d\n\x\a\t\k\r\z\5\s\u\j\l\d\d\j\a\0\i\g\t\o\7\m\1\3\5\n\z\z\n\1\x\s\3\1\j\n\7\k\k\l\h\0\m\o\t\r\d\k\u\6\t\k\9\k\a\z\k\l\7\4\w\q\9\b\q\y\3\6\g\t\c\d\q\b\r\a\d\w\v\r\5\k\c\1\x\q\s\w\z\o\g\b\o\6\x\h\u\x\s\1\g\o\7\s\p\j\q\z\k\4\l\6\5\8\d\y\e\g\2\r\p\d\1\4\s\j\c\t\2\t\2\c\y\l\r\q\7\4\b\f\y\y\x\e\a\n\o\5\5\7\b\9\u\b\c\1\0\k\7\x\9\g\r\x\q\l\n\x\o\4\m\y\7\u\8\4\y\m\v\b\t\e\b\f\u\v\x\4\n\m\8\t\3\q\x\z\t\s\g\7\q\s\v\a\q\a\3\3\2\3\z\f\d\m\b\b\u\1\o\k\5\s\p\2\9\8\3\m\z\7\i\f\w\f\5\n\c\a\b\t\7\7\o\4\b\0\5\6\8\8\5\k\5\r\l\q\v\m\2\o\1\e\z\9\3\k\0\l\s\w\n\g\o\b\q\3\o\c\p\r\e\x\n\6\q\o\b\o\f\t\o\x\g\y\4\z\t\n\k\r\7\j\q\h\t\m\3\r\g\g\x\c\m\w\t\6\b\y\6\8\y\a\u\r\k\o\m\g\w\d\b\p\1\2\2\k\i\i\g\p\g\g\w\b\8\r\b\i\m\o\k\k\l\l\1\u\q\p\c\l\5\k\6\g\5\1\4\o\o\a\a\b\8\j\1\i\q\e\o\g\t\t\h\k\u\5\a\m\4\3\r\v\r\0\i\j\v\1\0\s\m\x\7\t\1\o\1\k\w\3\7\p\1\b\c\l\f\g\x\l\k\j\q\9\a\y\b\d\d\9\v\2\7\6\j\z\t\h\5\d\f\e\1\d\p\r\w\4\p\b\b\q\o\x\p\m\b\8\s\h\g\5\k\u\j\1\5\y\3\i\z\i\o\w\2\f\h\1\j\h\b\j\6\x\p\7\g\1\9\c\4\s\y\i\1\f\z\o\9\r\m\3\u\s\7\e\8\u\1\y\e\5\q\5\g\m\d\h\o\6\8\o\8\q\9\s\s\f\1\6\w\0\z\b\e\m\g\a\6\5\e\6\1\o\0\0\c\v\8\v\l\l\y\m\m\6\e\z\v\8\b\k\z\z\x\k\g\j\x\4\j\m\t\6\v\d\f\6\q\u\k\1\n\3\w\k\x\6\u ]] 00:22:29.566 00:22:29.566 real 0m1.231s 00:22:29.566 user 0m0.880s 00:22:29.566 sys 0m0.227s 00:22:29.566 15:59:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:29.566 15:59:32 -- common/autotest_common.sh@10 -- # set +x 00:22:29.566 ************************************ 00:22:29.566 END TEST dd_rw_offset 00:22:29.566 ************************************ 00:22:29.566 15:59:32 -- dd/basic_rw.sh@1 -- # cleanup 00:22:29.566 15:59:32 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:22:29.566 15:59:32 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:22:29.566 15:59:32 -- dd/common.sh@11 -- # local nvme_ref= 00:22:29.566 15:59:32 -- dd/common.sh@12 -- # local size=0xffff 00:22:29.566 15:59:32 -- dd/common.sh@14 -- # local bs=1048576 
00:22:29.566 15:59:32 -- dd/common.sh@15 -- # local count=1 00:22:29.566 15:59:32 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:22:29.566 15:59:32 -- dd/common.sh@18 -- # gen_conf 00:22:29.566 15:59:32 -- dd/common.sh@31 -- # xtrace_disable 00:22:29.566 15:59:32 -- common/autotest_common.sh@10 -- # set +x 00:22:29.566 [2024-07-22 15:59:32.282635] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:29.566 [2024-07-22 15:59:32.282741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57943 ] 00:22:29.566 { 00:22:29.566 "subsystems": [ 00:22:29.566 { 00:22:29.566 "subsystem": "bdev", 00:22:29.566 "config": [ 00:22:29.566 { 00:22:29.566 "params": { 00:22:29.566 "trtype": "pcie", 00:22:29.566 "traddr": "0000:00:06.0", 00:22:29.566 "name": "Nvme0" 00:22:29.566 }, 00:22:29.566 "method": "bdev_nvme_attach_controller" 00:22:29.566 }, 00:22:29.566 { 00:22:29.566 "method": "bdev_wait_for_examine" 00:22:29.566 } 00:22:29.566 ] 00:22:29.566 } 00:22:29.566 ] 00:22:29.566 } 00:22:29.566 [2024-07-22 15:59:32.417935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.823 [2024-07-22 15:59:32.475996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.080  Copying: 1024/1024 [kB] (average 1000 MBps) 00:22:30.080 00:22:30.080 15:59:32 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:30.080 00:22:30.080 real 0m17.124s 00:22:30.080 user 0m12.845s 00:22:30.080 sys 0m2.847s 00:22:30.080 15:59:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:30.080 15:59:32 -- common/autotest_common.sh@10 -- # set +x 00:22:30.080 ************************************ 00:22:30.080 END TEST spdk_dd_basic_rw 00:22:30.080 ************************************ 00:22:30.081 15:59:32 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:22:30.081 15:59:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:30.081 15:59:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:30.081 15:59:32 -- common/autotest_common.sh@10 -- # set +x 00:22:30.081 ************************************ 00:22:30.081 START TEST spdk_dd_posix 00:22:30.081 ************************************ 00:22:30.081 15:59:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:22:30.081 * Looking for test storage... 
00:22:30.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:22:30.081 15:59:32 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:30.081 15:59:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.081 15:59:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.081 15:59:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.081 15:59:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.081 15:59:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.081 15:59:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.081 15:59:32 -- paths/export.sh@5 -- # export PATH 00:22:30.081 15:59:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.081 15:59:32 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:22:30.081 15:59:32 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:22:30.081 15:59:32 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:22:30.081 15:59:32 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:22:30.081 15:59:32 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:30.081 15:59:32 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:30.081 15:59:32 -- dd/posix.sh@130 -- # tests 00:22:30.081 15:59:32 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:22:30.081 * First test run, liburing in use 00:22:30.081 15:59:32 -- dd/posix.sh@102 -- # run_test 
dd_flag_append append 00:22:30.081 15:59:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:30.081 15:59:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:30.081 15:59:32 -- common/autotest_common.sh@10 -- # set +x 00:22:30.081 ************************************ 00:22:30.081 START TEST dd_flag_append 00:22:30.081 ************************************ 00:22:30.081 15:59:32 -- common/autotest_common.sh@1104 -- # append 00:22:30.081 15:59:32 -- dd/posix.sh@16 -- # local dump0 00:22:30.081 15:59:32 -- dd/posix.sh@17 -- # local dump1 00:22:30.081 15:59:32 -- dd/posix.sh@19 -- # gen_bytes 32 00:22:30.081 15:59:32 -- dd/common.sh@98 -- # xtrace_disable 00:22:30.081 15:59:32 -- common/autotest_common.sh@10 -- # set +x 00:22:30.081 15:59:32 -- dd/posix.sh@19 -- # dump0=9gnve5pjn3itn7w89nrxysihhe85t2tn 00:22:30.081 15:59:32 -- dd/posix.sh@20 -- # gen_bytes 32 00:22:30.081 15:59:32 -- dd/common.sh@98 -- # xtrace_disable 00:22:30.081 15:59:32 -- common/autotest_common.sh@10 -- # set +x 00:22:30.081 15:59:32 -- dd/posix.sh@20 -- # dump1=dizodcv38y88ce6c69v9hfhhcjndnkqk 00:22:30.081 15:59:32 -- dd/posix.sh@22 -- # printf %s 9gnve5pjn3itn7w89nrxysihhe85t2tn 00:22:30.081 15:59:32 -- dd/posix.sh@23 -- # printf %s dizodcv38y88ce6c69v9hfhhcjndnkqk 00:22:30.081 15:59:32 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:22:30.337 [2024-07-22 15:59:32.975126] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:30.337 [2024-07-22 15:59:32.976140] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58005 ] 00:22:30.337 [2024-07-22 15:59:33.115301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.337 [2024-07-22 15:59:33.195134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.594  Copying: 32/32 [B] (average 31 kBps) 00:22:30.594 00:22:30.852 ************************************ 00:22:30.852 END TEST dd_flag_append 00:22:30.852 ************************************ 00:22:30.852 15:59:33 -- dd/posix.sh@27 -- # [[ dizodcv38y88ce6c69v9hfhhcjndnkqk9gnve5pjn3itn7w89nrxysihhe85t2tn == \d\i\z\o\d\c\v\3\8\y\8\8\c\e\6\c\6\9\v\9\h\f\h\h\c\j\n\d\n\k\q\k\9\g\n\v\e\5\p\j\n\3\i\t\n\7\w\8\9\n\r\x\y\s\i\h\h\e\8\5\t\2\t\n ]] 00:22:30.852 00:22:30.852 real 0m0.552s 00:22:30.852 user 0m0.328s 00:22:30.852 sys 0m0.101s 00:22:30.852 15:59:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:30.852 15:59:33 -- common/autotest_common.sh@10 -- # set +x 00:22:30.852 15:59:33 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:22:30.852 15:59:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:30.852 15:59:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:30.852 15:59:33 -- common/autotest_common.sh@10 -- # set +x 00:22:30.852 ************************************ 00:22:30.852 START TEST dd_flag_directory 00:22:30.852 ************************************ 00:22:30.852 15:59:33 -- common/autotest_common.sh@1104 -- # directory 00:22:30.852 15:59:33 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:30.852 15:59:33 -- 
common/autotest_common.sh@640 -- # local es=0 00:22:30.852 15:59:33 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:30.852 15:59:33 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:30.852 15:59:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:30.852 15:59:33 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:30.852 15:59:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:30.852 15:59:33 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:30.852 15:59:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:30.852 15:59:33 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:30.852 15:59:33 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:30.852 15:59:33 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:30.852 [2024-07-22 15:59:33.545432] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:30.852 [2024-07-22 15:59:33.545535] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58026 ] 00:22:30.852 [2024-07-22 15:59:33.681194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.110 [2024-07-22 15:59:33.739520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.110 [2024-07-22 15:59:33.787263] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:22:31.110 [2024-07-22 15:59:33.787330] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:22:31.110 [2024-07-22 15:59:33.787347] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:31.110 [2024-07-22 15:59:33.850543] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:22:31.110 15:59:33 -- common/autotest_common.sh@643 -- # es=236 00:22:31.110 15:59:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:31.110 15:59:33 -- common/autotest_common.sh@652 -- # es=108 00:22:31.110 15:59:33 -- common/autotest_common.sh@653 -- # case "$es" in 00:22:31.110 15:59:33 -- common/autotest_common.sh@660 -- # es=1 00:22:31.110 15:59:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:31.110 15:59:33 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:22:31.110 15:59:33 -- common/autotest_common.sh@640 -- # local es=0 00:22:31.110 15:59:33 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:22:31.110 15:59:33 -- common/autotest_common.sh@628 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:31.110 15:59:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:31.110 15:59:33 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:31.110 15:59:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:31.110 15:59:33 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:31.110 15:59:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:31.110 15:59:33 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:31.110 15:59:33 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:31.110 15:59:33 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:22:31.368 [2024-07-22 15:59:34.001258] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:31.368 [2024-07-22 15:59:34.001350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58041 ] 00:22:31.368 [2024-07-22 15:59:34.136324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.368 [2024-07-22 15:59:34.193986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.626 [2024-07-22 15:59:34.240846] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:22:31.626 [2024-07-22 15:59:34.240906] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:22:31.626 [2024-07-22 15:59:34.240922] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:31.626 [2024-07-22 15:59:34.307197] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:22:31.626 15:59:34 -- common/autotest_common.sh@643 -- # es=236 00:22:31.626 15:59:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:31.626 15:59:34 -- common/autotest_common.sh@652 -- # es=108 00:22:31.626 ************************************ 00:22:31.626 END TEST dd_flag_directory 00:22:31.626 ************************************ 00:22:31.626 15:59:34 -- common/autotest_common.sh@653 -- # case "$es" in 00:22:31.626 15:59:34 -- common/autotest_common.sh@660 -- # es=1 00:22:31.626 15:59:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:31.626 00:22:31.626 real 0m0.916s 00:22:31.626 user 0m0.528s 00:22:31.626 sys 0m0.179s 00:22:31.626 15:59:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:31.626 15:59:34 -- common/autotest_common.sh@10 -- # set +x 00:22:31.626 15:59:34 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:22:31.626 15:59:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:31.626 15:59:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:31.626 15:59:34 -- common/autotest_common.sh@10 -- # set +x 00:22:31.626 ************************************ 00:22:31.626 START TEST dd_flag_nofollow 00:22:31.626 ************************************ 00:22:31.626 15:59:34 -- common/autotest_common.sh@1104 -- # nofollow 00:22:31.626 15:59:34 -- dd/posix.sh@36 -- # local 
test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:22:31.626 15:59:34 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:22:31.626 15:59:34 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:22:31.626 15:59:34 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:22:31.626 15:59:34 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:31.626 15:59:34 -- common/autotest_common.sh@640 -- # local es=0 00:22:31.626 15:59:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:31.626 15:59:34 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:31.626 15:59:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:31.626 15:59:34 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:31.626 15:59:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:31.626 15:59:34 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:31.626 15:59:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:31.626 15:59:34 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:31.626 15:59:34 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:31.626 15:59:34 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:31.883 [2024-07-22 15:59:34.510815] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:31.883 [2024-07-22 15:59:34.510903] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58064 ] 00:22:31.883 [2024-07-22 15:59:34.641778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.883 [2024-07-22 15:59:34.725889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.141 [2024-07-22 15:59:34.779026] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:22:32.141 [2024-07-22 15:59:34.779081] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:22:32.141 [2024-07-22 15:59:34.779097] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:32.141 [2024-07-22 15:59:34.844252] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:22:32.141 15:59:34 -- common/autotest_common.sh@643 -- # es=216 00:22:32.141 15:59:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:32.141 15:59:34 -- common/autotest_common.sh@652 -- # es=88 00:22:32.141 15:59:34 -- common/autotest_common.sh@653 -- # case "$es" in 00:22:32.141 15:59:34 -- common/autotest_common.sh@660 -- # es=1 00:22:32.141 15:59:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:32.141 15:59:34 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:22:32.141 15:59:34 -- common/autotest_common.sh@640 -- # local es=0 00:22:32.141 15:59:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:22:32.141 15:59:34 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:32.141 15:59:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:32.141 15:59:34 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:32.141 15:59:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:32.141 15:59:34 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:32.141 15:59:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:32.141 15:59:34 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:32.141 15:59:34 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:32.141 15:59:34 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:22:32.141 [2024-07-22 15:59:35.000229] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:32.141 [2024-07-22 15:59:35.000319] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58079 ] 00:22:32.399 [2024-07-22 15:59:35.131254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.399 [2024-07-22 15:59:35.207225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.399 [2024-07-22 15:59:35.254719] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:22:32.399 [2024-07-22 15:59:35.254791] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:22:32.399 [2024-07-22 15:59:35.254817] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:32.657 [2024-07-22 15:59:35.322089] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:22:32.657 15:59:35 -- common/autotest_common.sh@643 -- # es=216 00:22:32.657 15:59:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:32.657 15:59:35 -- common/autotest_common.sh@652 -- # es=88 00:22:32.657 15:59:35 -- common/autotest_common.sh@653 -- # case "$es" in 00:22:32.657 15:59:35 -- common/autotest_common.sh@660 -- # es=1 00:22:32.657 15:59:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:32.657 15:59:35 -- dd/posix.sh@46 -- # gen_bytes 512 00:22:32.657 15:59:35 -- dd/common.sh@98 -- # xtrace_disable 00:22:32.657 15:59:35 -- common/autotest_common.sh@10 -- # set +x 00:22:32.657 15:59:35 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:32.657 [2024-07-22 15:59:35.484534] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:32.657 [2024-07-22 15:59:35.484627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58081 ] 00:22:32.915 [2024-07-22 15:59:35.616851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.915 [2024-07-22 15:59:35.690869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.173  Copying: 512/512 [B] (average 500 kBps) 00:22:33.173 00:22:33.173 15:59:35 -- dd/posix.sh@49 -- # [[ rpaao239ja2d0t8vcvingtbsz8z1dal8j9ui3d9n4v7ktezk2xhttxurc4twvwydgxwaeh9u7o3hteqg9fbqxqm22tulr8ejohvmy1hpsquekjkc4t57l8vcjajm81v6mj6di3mymwtill5j0onwi1viaa053c9yeq3p8pf18p4xyzvcrrkt0ma6ywecn4qrg9alkum8ff5wypo3d0lifvmyfsmofvaxjd7ma6zzv8beiax8zoy6z8v3i9lwl1fc9igkgzl3rg2n3hf9s0habg2qvk6lg9mjw2bg2a1eqxq5sw175qmqlk9rdato7s2zv0forz02cbn0uby8tihrnbhm745yvggqmp5m16zyq81vcmu2pccduhjcbt6k0rpo102acahppdspywszzx2v41bn917713qimuwg5coc2lkawy6xdde4fjpf3mgn8mxwkzoid2b6t47pxfuqez0iidwvscfjhcugj6nisrhdmv0ikqj2om72ij2rq3dwl3hi == \r\p\a\a\o\2\3\9\j\a\2\d\0\t\8\v\c\v\i\n\g\t\b\s\z\8\z\1\d\a\l\8\j\9\u\i\3\d\9\n\4\v\7\k\t\e\z\k\2\x\h\t\t\x\u\r\c\4\t\w\v\w\y\d\g\x\w\a\e\h\9\u\7\o\3\h\t\e\q\g\9\f\b\q\x\q\m\2\2\t\u\l\r\8\e\j\o\h\v\m\y\1\h\p\s\q\u\e\k\j\k\c\4\t\5\7\l\8\v\c\j\a\j\m\8\1\v\6\m\j\6\d\i\3\m\y\m\w\t\i\l\l\5\j\0\o\n\w\i\1\v\i\a\a\0\5\3\c\9\y\e\q\3\p\8\p\f\1\8\p\4\x\y\z\v\c\r\r\k\t\0\m\a\6\y\w\e\c\n\4\q\r\g\9\a\l\k\u\m\8\f\f\5\w\y\p\o\3\d\0\l\i\f\v\m\y\f\s\m\o\f\v\a\x\j\d\7\m\a\6\z\z\v\8\b\e\i\a\x\8\z\o\y\6\z\8\v\3\i\9\l\w\l\1\f\c\9\i\g\k\g\z\l\3\r\g\2\n\3\h\f\9\s\0\h\a\b\g\2\q\v\k\6\l\g\9\m\j\w\2\b\g\2\a\1\e\q\x\q\5\s\w\1\7\5\q\m\q\l\k\9\r\d\a\t\o\7\s\2\z\v\0\f\o\r\z\0\2\c\b\n\0\u\b\y\8\t\i\h\r\n\b\h\m\7\4\5\y\v\g\g\q\m\p\5\m\1\6\z\y\q\8\1\v\c\m\u\2\p\c\c\d\u\h\j\c\b\t\6\k\0\r\p\o\1\0\2\a\c\a\h\p\p\d\s\p\y\w\s\z\z\x\2\v\4\1\b\n\9\1\7\7\1\3\q\i\m\u\w\g\5\c\o\c\2\l\k\a\w\y\6\x\d\d\e\4\f\j\p\f\3\m\g\n\8\m\x\w\k\z\o\i\d\2\b\6\t\4\7\p\x\f\u\q\e\z\0\i\i\d\w\v\s\c\f\j\h\c\u\g\j\6\n\i\s\r\h\d\m\v\0\i\k\q\j\2\o\m\7\2\i\j\2\r\q\3\d\w\l\3\h\i ]] 00:22:33.173 00:22:33.173 real 0m1.475s 00:22:33.173 user 0m0.851s 00:22:33.173 sys 0m0.293s 00:22:33.173 15:59:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:33.173 15:59:35 -- common/autotest_common.sh@10 -- # set +x 00:22:33.173 ************************************ 00:22:33.173 END TEST dd_flag_nofollow 00:22:33.173 ************************************ 00:22:33.173 15:59:35 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:22:33.173 15:59:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:33.173 15:59:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:33.173 15:59:35 -- common/autotest_common.sh@10 -- # set +x 00:22:33.173 ************************************ 00:22:33.173 START TEST dd_flag_noatime 00:22:33.173 ************************************ 00:22:33.173 15:59:35 -- common/autotest_common.sh@1104 -- # noatime 00:22:33.173 15:59:35 -- dd/posix.sh@53 -- # local atime_if 00:22:33.173 15:59:35 -- dd/posix.sh@54 -- # local atime_of 00:22:33.173 15:59:35 -- dd/posix.sh@58 -- # gen_bytes 512 00:22:33.173 15:59:35 -- dd/common.sh@98 -- # xtrace_disable 00:22:33.173 15:59:35 -- common/autotest_common.sh@10 -- # set +x 00:22:33.173 15:59:35 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:33.173 15:59:35 -- dd/posix.sh@60 -- # atime_if=1721663975 00:22:33.174 15:59:35 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:33.174 15:59:35 -- dd/posix.sh@61 -- # atime_of=1721663975 00:22:33.174 15:59:35 -- dd/posix.sh@66 -- # sleep 1 00:22:34.547 15:59:36 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:34.547 [2024-07-22 15:59:37.054142] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:34.547 [2024-07-22 15:59:37.054272] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58127 ] 00:22:34.547 [2024-07-22 15:59:37.193979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.547 [2024-07-22 15:59:37.257087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.805  Copying: 512/512 [B] (average 500 kBps) 00:22:34.805 00:22:34.805 15:59:37 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:34.805 15:59:37 -- dd/posix.sh@69 -- # (( atime_if == 1721663975 )) 00:22:34.805 15:59:37 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:34.805 15:59:37 -- dd/posix.sh@70 -- # (( atime_of == 1721663975 )) 00:22:34.805 15:59:37 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:34.805 [2024-07-22 15:59:37.563525] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:34.805 [2024-07-22 15:59:37.563617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58133 ] 00:22:35.062 [2024-07-22 15:59:37.698671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.062 [2024-07-22 15:59:37.756406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.320  Copying: 512/512 [B] (average 500 kBps) 00:22:35.320 00:22:35.320 15:59:38 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:35.320 15:59:38 -- dd/posix.sh@73 -- # (( atime_if < 1721663977 )) 00:22:35.320 00:22:35.320 real 0m2.056s 00:22:35.320 user 0m0.605s 00:22:35.320 sys 0m0.197s 00:22:35.320 15:59:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:35.320 ************************************ 00:22:35.320 END TEST dd_flag_noatime 00:22:35.321 ************************************ 00:22:35.321 15:59:38 -- common/autotest_common.sh@10 -- # set +x 00:22:35.321 15:59:38 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:22:35.321 15:59:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:35.321 15:59:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:35.321 15:59:38 -- common/autotest_common.sh@10 -- # set +x 00:22:35.321 ************************************ 00:22:35.321 START TEST dd_flags_misc 00:22:35.321 ************************************ 00:22:35.321 15:59:38 -- common/autotest_common.sh@1104 -- # io 00:22:35.321 15:59:38 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:22:35.321 15:59:38 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
00:22:35.321 15:59:38 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:22:35.321 15:59:38 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:22:35.321 15:59:38 -- dd/posix.sh@86 -- # gen_bytes 512 00:22:35.321 15:59:38 -- dd/common.sh@98 -- # xtrace_disable 00:22:35.321 15:59:38 -- common/autotest_common.sh@10 -- # set +x 00:22:35.321 15:59:38 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:22:35.321 15:59:38 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:22:35.321 [2024-07-22 15:59:38.125079] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:35.321 [2024-07-22 15:59:38.125198] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58165 ] 00:22:35.578 [2024-07-22 15:59:38.258993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.578 [2024-07-22 15:59:38.318049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.837  Copying: 512/512 [B] (average 500 kBps) 00:22:35.837 00:22:35.837 15:59:38 -- dd/posix.sh@93 -- # [[ 5lz0z20ukif9p6gcvo1fuzwjq6hyqoeirv4jn0ejk9dcz65kb7ejxrmqrv9nvtbpjgor9j7lasly26e3wvupuhwlf5olf0ml7u3u6qtu1so0foidkydci12zm1ul5u8e6rlmsg777m2u2360fh3sy06luh42bnm9hx86lhc450szfhbtdzdc396umyahth6ic8obvu9o0h70zx3xuy2qd9uuj51k5h4y9fofjxyp2m8cmug4sl5d72awzr86ygvre8jsgfrhrwe4s541ydsgc9fzmeosdn1371qh4febm12vo4undw1bdnk1kggh8i7lp91phke5dxfz3selcy125z058l8bkav70d65vns0y4aziq4kg46eg6ybrx7yuyj8g5zl4k8mkyv8riydadw6vhqxrpz7ycc8n4e7r0o9myhgkfrdr78m1bcobh8l5tncg9655mlsoat48j9trncntc0s2sopodyvd0u5f4rcrdqpi6c29vvx6f289isdluxn == \5\l\z\0\z\2\0\u\k\i\f\9\p\6\g\c\v\o\1\f\u\z\w\j\q\6\h\y\q\o\e\i\r\v\4\j\n\0\e\j\k\9\d\c\z\6\5\k\b\7\e\j\x\r\m\q\r\v\9\n\v\t\b\p\j\g\o\r\9\j\7\l\a\s\l\y\2\6\e\3\w\v\u\p\u\h\w\l\f\5\o\l\f\0\m\l\7\u\3\u\6\q\t\u\1\s\o\0\f\o\i\d\k\y\d\c\i\1\2\z\m\1\u\l\5\u\8\e\6\r\l\m\s\g\7\7\7\m\2\u\2\3\6\0\f\h\3\s\y\0\6\l\u\h\4\2\b\n\m\9\h\x\8\6\l\h\c\4\5\0\s\z\f\h\b\t\d\z\d\c\3\9\6\u\m\y\a\h\t\h\6\i\c\8\o\b\v\u\9\o\0\h\7\0\z\x\3\x\u\y\2\q\d\9\u\u\j\5\1\k\5\h\4\y\9\f\o\f\j\x\y\p\2\m\8\c\m\u\g\4\s\l\5\d\7\2\a\w\z\r\8\6\y\g\v\r\e\8\j\s\g\f\r\h\r\w\e\4\s\5\4\1\y\d\s\g\c\9\f\z\m\e\o\s\d\n\1\3\7\1\q\h\4\f\e\b\m\1\2\v\o\4\u\n\d\w\1\b\d\n\k\1\k\g\g\h\8\i\7\l\p\9\1\p\h\k\e\5\d\x\f\z\3\s\e\l\c\y\1\2\5\z\0\5\8\l\8\b\k\a\v\7\0\d\6\5\v\n\s\0\y\4\a\z\i\q\4\k\g\4\6\e\g\6\y\b\r\x\7\y\u\y\j\8\g\5\z\l\4\k\8\m\k\y\v\8\r\i\y\d\a\d\w\6\v\h\q\x\r\p\z\7\y\c\c\8\n\4\e\7\r\0\o\9\m\y\h\g\k\f\r\d\r\7\8\m\1\b\c\o\b\h\8\l\5\t\n\c\g\9\6\5\5\m\l\s\o\a\t\4\8\j\9\t\r\n\c\n\t\c\0\s\2\s\o\p\o\d\y\v\d\0\u\5\f\4\r\c\r\d\q\p\i\6\c\2\9\v\v\x\6\f\2\8\9\i\s\d\l\u\x\n ]] 00:22:35.837 15:59:38 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:22:35.837 15:59:38 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:22:35.837 [2024-07-22 15:59:38.615712] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:35.837 [2024-07-22 15:59:38.615849] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58167 ] 00:22:36.119 [2024-07-22 15:59:38.753338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.119 [2024-07-22 15:59:38.837195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.398  Copying: 512/512 [B] (average 500 kBps) 00:22:36.398 00:22:36.398 15:59:39 -- dd/posix.sh@93 -- # [[ 5lz0z20ukif9p6gcvo1fuzwjq6hyqoeirv4jn0ejk9dcz65kb7ejxrmqrv9nvtbpjgor9j7lasly26e3wvupuhwlf5olf0ml7u3u6qtu1so0foidkydci12zm1ul5u8e6rlmsg777m2u2360fh3sy06luh42bnm9hx86lhc450szfhbtdzdc396umyahth6ic8obvu9o0h70zx3xuy2qd9uuj51k5h4y9fofjxyp2m8cmug4sl5d72awzr86ygvre8jsgfrhrwe4s541ydsgc9fzmeosdn1371qh4febm12vo4undw1bdnk1kggh8i7lp91phke5dxfz3selcy125z058l8bkav70d65vns0y4aziq4kg46eg6ybrx7yuyj8g5zl4k8mkyv8riydadw6vhqxrpz7ycc8n4e7r0o9myhgkfrdr78m1bcobh8l5tncg9655mlsoat48j9trncntc0s2sopodyvd0u5f4rcrdqpi6c29vvx6f289isdluxn == \5\l\z\0\z\2\0\u\k\i\f\9\p\6\g\c\v\o\1\f\u\z\w\j\q\6\h\y\q\o\e\i\r\v\4\j\n\0\e\j\k\9\d\c\z\6\5\k\b\7\e\j\x\r\m\q\r\v\9\n\v\t\b\p\j\g\o\r\9\j\7\l\a\s\l\y\2\6\e\3\w\v\u\p\u\h\w\l\f\5\o\l\f\0\m\l\7\u\3\u\6\q\t\u\1\s\o\0\f\o\i\d\k\y\d\c\i\1\2\z\m\1\u\l\5\u\8\e\6\r\l\m\s\g\7\7\7\m\2\u\2\3\6\0\f\h\3\s\y\0\6\l\u\h\4\2\b\n\m\9\h\x\8\6\l\h\c\4\5\0\s\z\f\h\b\t\d\z\d\c\3\9\6\u\m\y\a\h\t\h\6\i\c\8\o\b\v\u\9\o\0\h\7\0\z\x\3\x\u\y\2\q\d\9\u\u\j\5\1\k\5\h\4\y\9\f\o\f\j\x\y\p\2\m\8\c\m\u\g\4\s\l\5\d\7\2\a\w\z\r\8\6\y\g\v\r\e\8\j\s\g\f\r\h\r\w\e\4\s\5\4\1\y\d\s\g\c\9\f\z\m\e\o\s\d\n\1\3\7\1\q\h\4\f\e\b\m\1\2\v\o\4\u\n\d\w\1\b\d\n\k\1\k\g\g\h\8\i\7\l\p\9\1\p\h\k\e\5\d\x\f\z\3\s\e\l\c\y\1\2\5\z\0\5\8\l\8\b\k\a\v\7\0\d\6\5\v\n\s\0\y\4\a\z\i\q\4\k\g\4\6\e\g\6\y\b\r\x\7\y\u\y\j\8\g\5\z\l\4\k\8\m\k\y\v\8\r\i\y\d\a\d\w\6\v\h\q\x\r\p\z\7\y\c\c\8\n\4\e\7\r\0\o\9\m\y\h\g\k\f\r\d\r\7\8\m\1\b\c\o\b\h\8\l\5\t\n\c\g\9\6\5\5\m\l\s\o\a\t\4\8\j\9\t\r\n\c\n\t\c\0\s\2\s\o\p\o\d\y\v\d\0\u\5\f\4\r\c\r\d\q\p\i\6\c\2\9\v\v\x\6\f\2\8\9\i\s\d\l\u\x\n ]] 00:22:36.398 15:59:39 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:22:36.398 15:59:39 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:22:36.398 [2024-07-22 15:59:39.135327] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:36.399 [2024-07-22 15:59:39.135462] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58180 ] 00:22:36.656 [2024-07-22 15:59:39.273539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.656 [2024-07-22 15:59:39.346255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.915  Copying: 512/512 [B] (average 250 kBps) 00:22:36.915 00:22:36.915 15:59:39 -- dd/posix.sh@93 -- # [[ 5lz0z20ukif9p6gcvo1fuzwjq6hyqoeirv4jn0ejk9dcz65kb7ejxrmqrv9nvtbpjgor9j7lasly26e3wvupuhwlf5olf0ml7u3u6qtu1so0foidkydci12zm1ul5u8e6rlmsg777m2u2360fh3sy06luh42bnm9hx86lhc450szfhbtdzdc396umyahth6ic8obvu9o0h70zx3xuy2qd9uuj51k5h4y9fofjxyp2m8cmug4sl5d72awzr86ygvre8jsgfrhrwe4s541ydsgc9fzmeosdn1371qh4febm12vo4undw1bdnk1kggh8i7lp91phke5dxfz3selcy125z058l8bkav70d65vns0y4aziq4kg46eg6ybrx7yuyj8g5zl4k8mkyv8riydadw6vhqxrpz7ycc8n4e7r0o9myhgkfrdr78m1bcobh8l5tncg9655mlsoat48j9trncntc0s2sopodyvd0u5f4rcrdqpi6c29vvx6f289isdluxn == \5\l\z\0\z\2\0\u\k\i\f\9\p\6\g\c\v\o\1\f\u\z\w\j\q\6\h\y\q\o\e\i\r\v\4\j\n\0\e\j\k\9\d\c\z\6\5\k\b\7\e\j\x\r\m\q\r\v\9\n\v\t\b\p\j\g\o\r\9\j\7\l\a\s\l\y\2\6\e\3\w\v\u\p\u\h\w\l\f\5\o\l\f\0\m\l\7\u\3\u\6\q\t\u\1\s\o\0\f\o\i\d\k\y\d\c\i\1\2\z\m\1\u\l\5\u\8\e\6\r\l\m\s\g\7\7\7\m\2\u\2\3\6\0\f\h\3\s\y\0\6\l\u\h\4\2\b\n\m\9\h\x\8\6\l\h\c\4\5\0\s\z\f\h\b\t\d\z\d\c\3\9\6\u\m\y\a\h\t\h\6\i\c\8\o\b\v\u\9\o\0\h\7\0\z\x\3\x\u\y\2\q\d\9\u\u\j\5\1\k\5\h\4\y\9\f\o\f\j\x\y\p\2\m\8\c\m\u\g\4\s\l\5\d\7\2\a\w\z\r\8\6\y\g\v\r\e\8\j\s\g\f\r\h\r\w\e\4\s\5\4\1\y\d\s\g\c\9\f\z\m\e\o\s\d\n\1\3\7\1\q\h\4\f\e\b\m\1\2\v\o\4\u\n\d\w\1\b\d\n\k\1\k\g\g\h\8\i\7\l\p\9\1\p\h\k\e\5\d\x\f\z\3\s\e\l\c\y\1\2\5\z\0\5\8\l\8\b\k\a\v\7\0\d\6\5\v\n\s\0\y\4\a\z\i\q\4\k\g\4\6\e\g\6\y\b\r\x\7\y\u\y\j\8\g\5\z\l\4\k\8\m\k\y\v\8\r\i\y\d\a\d\w\6\v\h\q\x\r\p\z\7\y\c\c\8\n\4\e\7\r\0\o\9\m\y\h\g\k\f\r\d\r\7\8\m\1\b\c\o\b\h\8\l\5\t\n\c\g\9\6\5\5\m\l\s\o\a\t\4\8\j\9\t\r\n\c\n\t\c\0\s\2\s\o\p\o\d\y\v\d\0\u\5\f\4\r\c\r\d\q\p\i\6\c\2\9\v\v\x\6\f\2\8\9\i\s\d\l\u\x\n ]] 00:22:36.915 15:59:39 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:22:36.915 15:59:39 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:22:36.915 [2024-07-22 15:59:39.649358] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:36.915 [2024-07-22 15:59:39.649510] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58182 ] 00:22:37.173 [2024-07-22 15:59:39.789480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.173 [2024-07-22 15:59:39.859634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.431  Copying: 512/512 [B] (average 500 kBps) 00:22:37.431 00:22:37.431 15:59:40 -- dd/posix.sh@93 -- # [[ 5lz0z20ukif9p6gcvo1fuzwjq6hyqoeirv4jn0ejk9dcz65kb7ejxrmqrv9nvtbpjgor9j7lasly26e3wvupuhwlf5olf0ml7u3u6qtu1so0foidkydci12zm1ul5u8e6rlmsg777m2u2360fh3sy06luh42bnm9hx86lhc450szfhbtdzdc396umyahth6ic8obvu9o0h70zx3xuy2qd9uuj51k5h4y9fofjxyp2m8cmug4sl5d72awzr86ygvre8jsgfrhrwe4s541ydsgc9fzmeosdn1371qh4febm12vo4undw1bdnk1kggh8i7lp91phke5dxfz3selcy125z058l8bkav70d65vns0y4aziq4kg46eg6ybrx7yuyj8g5zl4k8mkyv8riydadw6vhqxrpz7ycc8n4e7r0o9myhgkfrdr78m1bcobh8l5tncg9655mlsoat48j9trncntc0s2sopodyvd0u5f4rcrdqpi6c29vvx6f289isdluxn == \5\l\z\0\z\2\0\u\k\i\f\9\p\6\g\c\v\o\1\f\u\z\w\j\q\6\h\y\q\o\e\i\r\v\4\j\n\0\e\j\k\9\d\c\z\6\5\k\b\7\e\j\x\r\m\q\r\v\9\n\v\t\b\p\j\g\o\r\9\j\7\l\a\s\l\y\2\6\e\3\w\v\u\p\u\h\w\l\f\5\o\l\f\0\m\l\7\u\3\u\6\q\t\u\1\s\o\0\f\o\i\d\k\y\d\c\i\1\2\z\m\1\u\l\5\u\8\e\6\r\l\m\s\g\7\7\7\m\2\u\2\3\6\0\f\h\3\s\y\0\6\l\u\h\4\2\b\n\m\9\h\x\8\6\l\h\c\4\5\0\s\z\f\h\b\t\d\z\d\c\3\9\6\u\m\y\a\h\t\h\6\i\c\8\o\b\v\u\9\o\0\h\7\0\z\x\3\x\u\y\2\q\d\9\u\u\j\5\1\k\5\h\4\y\9\f\o\f\j\x\y\p\2\m\8\c\m\u\g\4\s\l\5\d\7\2\a\w\z\r\8\6\y\g\v\r\e\8\j\s\g\f\r\h\r\w\e\4\s\5\4\1\y\d\s\g\c\9\f\z\m\e\o\s\d\n\1\3\7\1\q\h\4\f\e\b\m\1\2\v\o\4\u\n\d\w\1\b\d\n\k\1\k\g\g\h\8\i\7\l\p\9\1\p\h\k\e\5\d\x\f\z\3\s\e\l\c\y\1\2\5\z\0\5\8\l\8\b\k\a\v\7\0\d\6\5\v\n\s\0\y\4\a\z\i\q\4\k\g\4\6\e\g\6\y\b\r\x\7\y\u\y\j\8\g\5\z\l\4\k\8\m\k\y\v\8\r\i\y\d\a\d\w\6\v\h\q\x\r\p\z\7\y\c\c\8\n\4\e\7\r\0\o\9\m\y\h\g\k\f\r\d\r\7\8\m\1\b\c\o\b\h\8\l\5\t\n\c\g\9\6\5\5\m\l\s\o\a\t\4\8\j\9\t\r\n\c\n\t\c\0\s\2\s\o\p\o\d\y\v\d\0\u\5\f\4\r\c\r\d\q\p\i\6\c\2\9\v\v\x\6\f\2\8\9\i\s\d\l\u\x\n ]] 00:22:37.431 15:59:40 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:22:37.431 15:59:40 -- dd/posix.sh@86 -- # gen_bytes 512 00:22:37.431 15:59:40 -- dd/common.sh@98 -- # xtrace_disable 00:22:37.431 15:59:40 -- common/autotest_common.sh@10 -- # set +x 00:22:37.431 15:59:40 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:22:37.431 15:59:40 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:22:37.431 [2024-07-22 15:59:40.181015] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:37.431 [2024-07-22 15:59:40.181146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58195 ] 00:22:37.689 [2024-07-22 15:59:40.319285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.689 [2024-07-22 15:59:40.401430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.947  Copying: 512/512 [B] (average 500 kBps) 00:22:37.947 00:22:37.947 15:59:40 -- dd/posix.sh@93 -- # [[ ma7u9xgd746w1mtf4lq8zu0aitv0h8mx7skj2l76ats7qnd7o821qqq6sxr5wxmpvjl9fkypewrnvu74xjmyncv5czuvow3rm19qz9op83gs55y1lzsgj56ol621vbz7jbpv1nl1eyai23v1epzuhed9ps5gijw5gm6iz8xlmtuhw9oz7p1jd3lkbq6d8he4m2e0n3y011cfjy55ooixlbetat76epbuy9u8hdrhtmn9i9kfggmmrnvln16ckwxtt4e1nqdwzsmrpgxh6vqfilojcd8rq4jwrjnjsj7yfnilulvatxgi9izbi3xfqqntt2xxb0jujuqp5e2bda95buu3qtmtfehchgli3tifod9h12i9zunfi2iof4cub5vel55mrk6cc7gl52i14zlzl801xf625p9xhbfo3dhkwxirkkal7y8z7hnax0gtslbjk1nvvjbzevv7c4msozybru05bonshlzbmp8727hk0tx2w0337su3l312z4hvgljv == \m\a\7\u\9\x\g\d\7\4\6\w\1\m\t\f\4\l\q\8\z\u\0\a\i\t\v\0\h\8\m\x\7\s\k\j\2\l\7\6\a\t\s\7\q\n\d\7\o\8\2\1\q\q\q\6\s\x\r\5\w\x\m\p\v\j\l\9\f\k\y\p\e\w\r\n\v\u\7\4\x\j\m\y\n\c\v\5\c\z\u\v\o\w\3\r\m\1\9\q\z\9\o\p\8\3\g\s\5\5\y\1\l\z\s\g\j\5\6\o\l\6\2\1\v\b\z\7\j\b\p\v\1\n\l\1\e\y\a\i\2\3\v\1\e\p\z\u\h\e\d\9\p\s\5\g\i\j\w\5\g\m\6\i\z\8\x\l\m\t\u\h\w\9\o\z\7\p\1\j\d\3\l\k\b\q\6\d\8\h\e\4\m\2\e\0\n\3\y\0\1\1\c\f\j\y\5\5\o\o\i\x\l\b\e\t\a\t\7\6\e\p\b\u\y\9\u\8\h\d\r\h\t\m\n\9\i\9\k\f\g\g\m\m\r\n\v\l\n\1\6\c\k\w\x\t\t\4\e\1\n\q\d\w\z\s\m\r\p\g\x\h\6\v\q\f\i\l\o\j\c\d\8\r\q\4\j\w\r\j\n\j\s\j\7\y\f\n\i\l\u\l\v\a\t\x\g\i\9\i\z\b\i\3\x\f\q\q\n\t\t\2\x\x\b\0\j\u\j\u\q\p\5\e\2\b\d\a\9\5\b\u\u\3\q\t\m\t\f\e\h\c\h\g\l\i\3\t\i\f\o\d\9\h\1\2\i\9\z\u\n\f\i\2\i\o\f\4\c\u\b\5\v\e\l\5\5\m\r\k\6\c\c\7\g\l\5\2\i\1\4\z\l\z\l\8\0\1\x\f\6\2\5\p\9\x\h\b\f\o\3\d\h\k\w\x\i\r\k\k\a\l\7\y\8\z\7\h\n\a\x\0\g\t\s\l\b\j\k\1\n\v\v\j\b\z\e\v\v\7\c\4\m\s\o\z\y\b\r\u\0\5\b\o\n\s\h\l\z\b\m\p\8\7\2\7\h\k\0\t\x\2\w\0\3\3\7\s\u\3\l\3\1\2\z\4\h\v\g\l\j\v ]] 00:22:37.947 15:59:40 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:22:37.947 15:59:40 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:22:37.947 [2024-07-22 15:59:40.714358] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:37.947 [2024-07-22 15:59:40.714452] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58197 ] 00:22:38.211 [2024-07-22 15:59:40.843289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.211 [2024-07-22 15:59:40.901660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.474  Copying: 512/512 [B] (average 500 kBps) 00:22:38.474 00:22:38.474 15:59:41 -- dd/posix.sh@93 -- # [[ ma7u9xgd746w1mtf4lq8zu0aitv0h8mx7skj2l76ats7qnd7o821qqq6sxr5wxmpvjl9fkypewrnvu74xjmyncv5czuvow3rm19qz9op83gs55y1lzsgj56ol621vbz7jbpv1nl1eyai23v1epzuhed9ps5gijw5gm6iz8xlmtuhw9oz7p1jd3lkbq6d8he4m2e0n3y011cfjy55ooixlbetat76epbuy9u8hdrhtmn9i9kfggmmrnvln16ckwxtt4e1nqdwzsmrpgxh6vqfilojcd8rq4jwrjnjsj7yfnilulvatxgi9izbi3xfqqntt2xxb0jujuqp5e2bda95buu3qtmtfehchgli3tifod9h12i9zunfi2iof4cub5vel55mrk6cc7gl52i14zlzl801xf625p9xhbfo3dhkwxirkkal7y8z7hnax0gtslbjk1nvvjbzevv7c4msozybru05bonshlzbmp8727hk0tx2w0337su3l312z4hvgljv == \m\a\7\u\9\x\g\d\7\4\6\w\1\m\t\f\4\l\q\8\z\u\0\a\i\t\v\0\h\8\m\x\7\s\k\j\2\l\7\6\a\t\s\7\q\n\d\7\o\8\2\1\q\q\q\6\s\x\r\5\w\x\m\p\v\j\l\9\f\k\y\p\e\w\r\n\v\u\7\4\x\j\m\y\n\c\v\5\c\z\u\v\o\w\3\r\m\1\9\q\z\9\o\p\8\3\g\s\5\5\y\1\l\z\s\g\j\5\6\o\l\6\2\1\v\b\z\7\j\b\p\v\1\n\l\1\e\y\a\i\2\3\v\1\e\p\z\u\h\e\d\9\p\s\5\g\i\j\w\5\g\m\6\i\z\8\x\l\m\t\u\h\w\9\o\z\7\p\1\j\d\3\l\k\b\q\6\d\8\h\e\4\m\2\e\0\n\3\y\0\1\1\c\f\j\y\5\5\o\o\i\x\l\b\e\t\a\t\7\6\e\p\b\u\y\9\u\8\h\d\r\h\t\m\n\9\i\9\k\f\g\g\m\m\r\n\v\l\n\1\6\c\k\w\x\t\t\4\e\1\n\q\d\w\z\s\m\r\p\g\x\h\6\v\q\f\i\l\o\j\c\d\8\r\q\4\j\w\r\j\n\j\s\j\7\y\f\n\i\l\u\l\v\a\t\x\g\i\9\i\z\b\i\3\x\f\q\q\n\t\t\2\x\x\b\0\j\u\j\u\q\p\5\e\2\b\d\a\9\5\b\u\u\3\q\t\m\t\f\e\h\c\h\g\l\i\3\t\i\f\o\d\9\h\1\2\i\9\z\u\n\f\i\2\i\o\f\4\c\u\b\5\v\e\l\5\5\m\r\k\6\c\c\7\g\l\5\2\i\1\4\z\l\z\l\8\0\1\x\f\6\2\5\p\9\x\h\b\f\o\3\d\h\k\w\x\i\r\k\k\a\l\7\y\8\z\7\h\n\a\x\0\g\t\s\l\b\j\k\1\n\v\v\j\b\z\e\v\v\7\c\4\m\s\o\z\y\b\r\u\0\5\b\o\n\s\h\l\z\b\m\p\8\7\2\7\h\k\0\t\x\2\w\0\3\3\7\s\u\3\l\3\1\2\z\4\h\v\g\l\j\v ]] 00:22:38.474 15:59:41 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:22:38.474 15:59:41 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:22:38.474 [2024-07-22 15:59:41.227323] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:38.474 [2024-07-22 15:59:41.227454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58210 ] 00:22:38.732 [2024-07-22 15:59:41.365984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.732 [2024-07-22 15:59:41.424176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.991  Copying: 512/512 [B] (average 500 kBps) 00:22:38.991 00:22:38.991 15:59:41 -- dd/posix.sh@93 -- # [[ ma7u9xgd746w1mtf4lq8zu0aitv0h8mx7skj2l76ats7qnd7o821qqq6sxr5wxmpvjl9fkypewrnvu74xjmyncv5czuvow3rm19qz9op83gs55y1lzsgj56ol621vbz7jbpv1nl1eyai23v1epzuhed9ps5gijw5gm6iz8xlmtuhw9oz7p1jd3lkbq6d8he4m2e0n3y011cfjy55ooixlbetat76epbuy9u8hdrhtmn9i9kfggmmrnvln16ckwxtt4e1nqdwzsmrpgxh6vqfilojcd8rq4jwrjnjsj7yfnilulvatxgi9izbi3xfqqntt2xxb0jujuqp5e2bda95buu3qtmtfehchgli3tifod9h12i9zunfi2iof4cub5vel55mrk6cc7gl52i14zlzl801xf625p9xhbfo3dhkwxirkkal7y8z7hnax0gtslbjk1nvvjbzevv7c4msozybru05bonshlzbmp8727hk0tx2w0337su3l312z4hvgljv == \m\a\7\u\9\x\g\d\7\4\6\w\1\m\t\f\4\l\q\8\z\u\0\a\i\t\v\0\h\8\m\x\7\s\k\j\2\l\7\6\a\t\s\7\q\n\d\7\o\8\2\1\q\q\q\6\s\x\r\5\w\x\m\p\v\j\l\9\f\k\y\p\e\w\r\n\v\u\7\4\x\j\m\y\n\c\v\5\c\z\u\v\o\w\3\r\m\1\9\q\z\9\o\p\8\3\g\s\5\5\y\1\l\z\s\g\j\5\6\o\l\6\2\1\v\b\z\7\j\b\p\v\1\n\l\1\e\y\a\i\2\3\v\1\e\p\z\u\h\e\d\9\p\s\5\g\i\j\w\5\g\m\6\i\z\8\x\l\m\t\u\h\w\9\o\z\7\p\1\j\d\3\l\k\b\q\6\d\8\h\e\4\m\2\e\0\n\3\y\0\1\1\c\f\j\y\5\5\o\o\i\x\l\b\e\t\a\t\7\6\e\p\b\u\y\9\u\8\h\d\r\h\t\m\n\9\i\9\k\f\g\g\m\m\r\n\v\l\n\1\6\c\k\w\x\t\t\4\e\1\n\q\d\w\z\s\m\r\p\g\x\h\6\v\q\f\i\l\o\j\c\d\8\r\q\4\j\w\r\j\n\j\s\j\7\y\f\n\i\l\u\l\v\a\t\x\g\i\9\i\z\b\i\3\x\f\q\q\n\t\t\2\x\x\b\0\j\u\j\u\q\p\5\e\2\b\d\a\9\5\b\u\u\3\q\t\m\t\f\e\h\c\h\g\l\i\3\t\i\f\o\d\9\h\1\2\i\9\z\u\n\f\i\2\i\o\f\4\c\u\b\5\v\e\l\5\5\m\r\k\6\c\c\7\g\l\5\2\i\1\4\z\l\z\l\8\0\1\x\f\6\2\5\p\9\x\h\b\f\o\3\d\h\k\w\x\i\r\k\k\a\l\7\y\8\z\7\h\n\a\x\0\g\t\s\l\b\j\k\1\n\v\v\j\b\z\e\v\v\7\c\4\m\s\o\z\y\b\r\u\0\5\b\o\n\s\h\l\z\b\m\p\8\7\2\7\h\k\0\t\x\2\w\0\3\3\7\s\u\3\l\3\1\2\z\4\h\v\g\l\j\v ]] 00:22:38.991 15:59:41 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:22:38.991 15:59:41 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:22:38.991 [2024-07-22 15:59:41.730799] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:38.991 [2024-07-22 15:59:41.730955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58212 ] 00:22:39.249 [2024-07-22 15:59:41.869300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.249 [2024-07-22 15:59:41.952508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.507  Copying: 512/512 [B] (average 500 kBps) 00:22:39.507 00:22:39.507 15:59:42 -- dd/posix.sh@93 -- # [[ ma7u9xgd746w1mtf4lq8zu0aitv0h8mx7skj2l76ats7qnd7o821qqq6sxr5wxmpvjl9fkypewrnvu74xjmyncv5czuvow3rm19qz9op83gs55y1lzsgj56ol621vbz7jbpv1nl1eyai23v1epzuhed9ps5gijw5gm6iz8xlmtuhw9oz7p1jd3lkbq6d8he4m2e0n3y011cfjy55ooixlbetat76epbuy9u8hdrhtmn9i9kfggmmrnvln16ckwxtt4e1nqdwzsmrpgxh6vqfilojcd8rq4jwrjnjsj7yfnilulvatxgi9izbi3xfqqntt2xxb0jujuqp5e2bda95buu3qtmtfehchgli3tifod9h12i9zunfi2iof4cub5vel55mrk6cc7gl52i14zlzl801xf625p9xhbfo3dhkwxirkkal7y8z7hnax0gtslbjk1nvvjbzevv7c4msozybru05bonshlzbmp8727hk0tx2w0337su3l312z4hvgljv == \m\a\7\u\9\x\g\d\7\4\6\w\1\m\t\f\4\l\q\8\z\u\0\a\i\t\v\0\h\8\m\x\7\s\k\j\2\l\7\6\a\t\s\7\q\n\d\7\o\8\2\1\q\q\q\6\s\x\r\5\w\x\m\p\v\j\l\9\f\k\y\p\e\w\r\n\v\u\7\4\x\j\m\y\n\c\v\5\c\z\u\v\o\w\3\r\m\1\9\q\z\9\o\p\8\3\g\s\5\5\y\1\l\z\s\g\j\5\6\o\l\6\2\1\v\b\z\7\j\b\p\v\1\n\l\1\e\y\a\i\2\3\v\1\e\p\z\u\h\e\d\9\p\s\5\g\i\j\w\5\g\m\6\i\z\8\x\l\m\t\u\h\w\9\o\z\7\p\1\j\d\3\l\k\b\q\6\d\8\h\e\4\m\2\e\0\n\3\y\0\1\1\c\f\j\y\5\5\o\o\i\x\l\b\e\t\a\t\7\6\e\p\b\u\y\9\u\8\h\d\r\h\t\m\n\9\i\9\k\f\g\g\m\m\r\n\v\l\n\1\6\c\k\w\x\t\t\4\e\1\n\q\d\w\z\s\m\r\p\g\x\h\6\v\q\f\i\l\o\j\c\d\8\r\q\4\j\w\r\j\n\j\s\j\7\y\f\n\i\l\u\l\v\a\t\x\g\i\9\i\z\b\i\3\x\f\q\q\n\t\t\2\x\x\b\0\j\u\j\u\q\p\5\e\2\b\d\a\9\5\b\u\u\3\q\t\m\t\f\e\h\c\h\g\l\i\3\t\i\f\o\d\9\h\1\2\i\9\z\u\n\f\i\2\i\o\f\4\c\u\b\5\v\e\l\5\5\m\r\k\6\c\c\7\g\l\5\2\i\1\4\z\l\z\l\8\0\1\x\f\6\2\5\p\9\x\h\b\f\o\3\d\h\k\w\x\i\r\k\k\a\l\7\y\8\z\7\h\n\a\x\0\g\t\s\l\b\j\k\1\n\v\v\j\b\z\e\v\v\7\c\4\m\s\o\z\y\b\r\u\0\5\b\o\n\s\h\l\z\b\m\p\8\7\2\7\h\k\0\t\x\2\w\0\3\3\7\s\u\3\l\3\1\2\z\4\h\v\g\l\j\v ]] 00:22:39.507 00:22:39.507 real 0m4.121s 00:22:39.507 user 0m2.337s 00:22:39.507 sys 0m0.785s 00:22:39.507 15:59:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:39.507 15:59:42 -- common/autotest_common.sh@10 -- # set +x 00:22:39.507 ************************************ 00:22:39.507 END TEST dd_flags_misc 00:22:39.507 ************************************ 00:22:39.507 15:59:42 -- dd/posix.sh@131 -- # tests_forced_aio 00:22:39.507 15:59:42 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:22:39.507 * Second test run, disabling liburing, forcing AIO 00:22:39.507 15:59:42 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:22:39.507 15:59:42 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:22:39.507 15:59:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:39.507 15:59:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:39.507 15:59:42 -- common/autotest_common.sh@10 -- # set +x 00:22:39.507 ************************************ 00:22:39.507 START TEST dd_flag_append_forced_aio 00:22:39.507 ************************************ 00:22:39.507 15:59:42 -- common/autotest_common.sh@1104 -- # append 00:22:39.507 15:59:42 -- dd/posix.sh@16 -- # local dump0 00:22:39.507 15:59:42 -- dd/posix.sh@17 -- # local dump1 00:22:39.507 15:59:42 -- dd/posix.sh@19 -- # gen_bytes 32 00:22:39.507 15:59:42 -- 
dd/common.sh@98 -- # xtrace_disable 00:22:39.507 15:59:42 -- common/autotest_common.sh@10 -- # set +x 00:22:39.507 15:59:42 -- dd/posix.sh@19 -- # dump0=yuuzv848utxaq2w3hkz3mtth1scydf76 00:22:39.507 15:59:42 -- dd/posix.sh@20 -- # gen_bytes 32 00:22:39.507 15:59:42 -- dd/common.sh@98 -- # xtrace_disable 00:22:39.507 15:59:42 -- common/autotest_common.sh@10 -- # set +x 00:22:39.507 15:59:42 -- dd/posix.sh@20 -- # dump1=sy24k39y43pzu0lbvpk1zqzkm3d12zy2 00:22:39.507 15:59:42 -- dd/posix.sh@22 -- # printf %s yuuzv848utxaq2w3hkz3mtth1scydf76 00:22:39.507 15:59:42 -- dd/posix.sh@23 -- # printf %s sy24k39y43pzu0lbvpk1zqzkm3d12zy2 00:22:39.507 15:59:42 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:22:39.507 [2024-07-22 15:59:42.298600] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:39.507 [2024-07-22 15:59:42.298727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58244 ] 00:22:39.765 [2024-07-22 15:59:42.437167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.765 [2024-07-22 15:59:42.495061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.023  Copying: 32/32 [B] (average 31 kBps) 00:22:40.023 00:22:40.023 15:59:42 -- dd/posix.sh@27 -- # [[ sy24k39y43pzu0lbvpk1zqzkm3d12zy2yuuzv848utxaq2w3hkz3mtth1scydf76 == \s\y\2\4\k\3\9\y\4\3\p\z\u\0\l\b\v\p\k\1\z\q\z\k\m\3\d\1\2\z\y\2\y\u\u\z\v\8\4\8\u\t\x\a\q\2\w\3\h\k\z\3\m\t\t\h\1\s\c\y\d\f\7\6 ]] 00:22:40.023 00:22:40.023 real 0m0.503s 00:22:40.023 user 0m0.289s 00:22:40.023 sys 0m0.092s 00:22:40.023 15:59:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:40.023 ************************************ 00:22:40.023 15:59:42 -- common/autotest_common.sh@10 -- # set +x 00:22:40.023 END TEST dd_flag_append_forced_aio 00:22:40.023 ************************************ 00:22:40.023 15:59:42 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:22:40.023 15:59:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:40.023 15:59:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:40.023 15:59:42 -- common/autotest_common.sh@10 -- # set +x 00:22:40.023 ************************************ 00:22:40.023 START TEST dd_flag_directory_forced_aio 00:22:40.023 ************************************ 00:22:40.023 15:59:42 -- common/autotest_common.sh@1104 -- # directory 00:22:40.023 15:59:42 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:40.023 15:59:42 -- common/autotest_common.sh@640 -- # local es=0 00:22:40.023 15:59:42 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:40.023 15:59:42 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:40.023 15:59:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:40.024 15:59:42 -- common/autotest_common.sh@632 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:40.024 15:59:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:40.024 15:59:42 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:40.024 15:59:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:40.024 15:59:42 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:40.024 15:59:42 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:40.024 15:59:42 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:40.024 [2024-07-22 15:59:42.835400] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:40.024 [2024-07-22 15:59:42.835517] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58265 ] 00:22:40.290 [2024-07-22 15:59:42.966379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.290 [2024-07-22 15:59:43.030112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.290 [2024-07-22 15:59:43.076901] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:22:40.290 [2024-07-22 15:59:43.076963] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:22:40.290 [2024-07-22 15:59:43.076977] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:40.290 [2024-07-22 15:59:43.140880] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:22:40.552 15:59:43 -- common/autotest_common.sh@643 -- # es=236 00:22:40.552 15:59:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:40.552 15:59:43 -- common/autotest_common.sh@652 -- # es=108 00:22:40.552 15:59:43 -- common/autotest_common.sh@653 -- # case "$es" in 00:22:40.552 15:59:43 -- common/autotest_common.sh@660 -- # es=1 00:22:40.552 15:59:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:40.552 15:59:43 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:22:40.552 15:59:43 -- common/autotest_common.sh@640 -- # local es=0 00:22:40.552 15:59:43 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:22:40.552 15:59:43 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:40.552 15:59:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:40.552 15:59:43 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:40.552 15:59:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:40.552 15:59:43 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:40.552 15:59:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:40.552 15:59:43 -- 
common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:40.552 15:59:43 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:40.552 15:59:43 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:22:40.552 [2024-07-22 15:59:43.299083] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:40.552 [2024-07-22 15:59:43.299175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58280 ] 00:22:40.811 [2024-07-22 15:59:43.438341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.811 [2024-07-22 15:59:43.520149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.811 [2024-07-22 15:59:43.567711] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:22:40.811 [2024-07-22 15:59:43.567773] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:22:40.811 [2024-07-22 15:59:43.567789] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:40.811 [2024-07-22 15:59:43.633163] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:22:41.069 15:59:43 -- common/autotest_common.sh@643 -- # es=236 00:22:41.069 15:59:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:41.069 15:59:43 -- common/autotest_common.sh@652 -- # es=108 00:22:41.069 15:59:43 -- common/autotest_common.sh@653 -- # case "$es" in 00:22:41.069 15:59:43 -- common/autotest_common.sh@660 -- # es=1 00:22:41.069 15:59:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:41.069 00:22:41.069 real 0m0.965s 00:22:41.069 user 0m0.571s 00:22:41.069 sys 0m0.185s 00:22:41.069 15:59:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:41.069 15:59:43 -- common/autotest_common.sh@10 -- # set +x 00:22:41.069 ************************************ 00:22:41.069 END TEST dd_flag_directory_forced_aio 00:22:41.069 ************************************ 00:22:41.069 15:59:43 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:22:41.069 15:59:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:41.069 15:59:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:41.069 15:59:43 -- common/autotest_common.sh@10 -- # set +x 00:22:41.069 ************************************ 00:22:41.069 START TEST dd_flag_nofollow_forced_aio 00:22:41.069 ************************************ 00:22:41.069 15:59:43 -- common/autotest_common.sh@1104 -- # nofollow 00:22:41.069 15:59:43 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:22:41.069 15:59:43 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:22:41.069 15:59:43 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:22:41.069 15:59:43 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:22:41.069 15:59:43 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:41.069 15:59:43 -- common/autotest_common.sh@640 -- # local es=0 00:22:41.069 15:59:43 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:41.069 15:59:43 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:41.069 15:59:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:41.069 15:59:43 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:41.069 15:59:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:41.069 15:59:43 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:41.069 15:59:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:41.069 15:59:43 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:41.069 15:59:43 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:41.069 15:59:43 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:41.069 [2024-07-22 15:59:43.851561] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:41.069 [2024-07-22 15:59:43.851688] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58303 ] 00:22:41.327 [2024-07-22 15:59:43.998952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.327 [2024-07-22 15:59:44.081414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.327 [2024-07-22 15:59:44.132696] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:22:41.327 [2024-07-22 15:59:44.133008] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:22:41.327 [2024-07-22 15:59:44.133129] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:41.585 [2024-07-22 15:59:44.207807] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:22:41.585 15:59:44 -- common/autotest_common.sh@643 -- # es=216 00:22:41.585 15:59:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:41.585 15:59:44 -- common/autotest_common.sh@652 -- # es=88 00:22:41.585 15:59:44 -- common/autotest_common.sh@653 -- # case "$es" in 00:22:41.585 15:59:44 -- common/autotest_common.sh@660 -- # es=1 00:22:41.585 15:59:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:41.586 15:59:44 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:22:41.586 15:59:44 -- common/autotest_common.sh@640 -- # local es=0 00:22:41.586 15:59:44 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:22:41.586 15:59:44 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:41.586 15:59:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:41.586 15:59:44 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:41.586 15:59:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:41.586 15:59:44 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:41.586 15:59:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:41.586 15:59:44 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:41.586 15:59:44 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:41.586 15:59:44 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:22:41.586 [2024-07-22 15:59:44.371966] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:41.586 [2024-07-22 15:59:44.372067] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58318 ] 00:22:41.844 [2024-07-22 15:59:44.502286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.844 [2024-07-22 15:59:44.564650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.844 [2024-07-22 15:59:44.610103] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:22:41.844 [2024-07-22 15:59:44.610158] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:22:41.844 [2024-07-22 15:59:44.610173] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:41.844 [2024-07-22 15:59:44.674151] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:22:42.101 15:59:44 -- common/autotest_common.sh@643 -- # es=216 00:22:42.101 15:59:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:42.101 15:59:44 -- common/autotest_common.sh@652 -- # es=88 00:22:42.101 15:59:44 -- common/autotest_common.sh@653 -- # case "$es" in 00:22:42.101 15:59:44 -- common/autotest_common.sh@660 -- # es=1 00:22:42.101 15:59:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:42.101 15:59:44 -- dd/posix.sh@46 -- # gen_bytes 512 00:22:42.101 15:59:44 -- dd/common.sh@98 -- # xtrace_disable 00:22:42.101 15:59:44 -- common/autotest_common.sh@10 -- # set +x 00:22:42.101 15:59:44 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:42.101 [2024-07-22 15:59:44.847899] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:42.101 [2024-07-22 15:59:44.848014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58320 ] 00:22:42.359 [2024-07-22 15:59:44.990175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.359 [2024-07-22 15:59:45.047955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.618  Copying: 512/512 [B] (average 500 kBps) 00:22:42.618 00:22:42.618 15:59:45 -- dd/posix.sh@49 -- # [[ s08n2e5di9pgg6l7v3fstkdzji7rp2a9ttzkmjbnxialw21zuqky5ddvf1n3pcfgdwe13hn9v9b4tfexosf3ilwv38d37ouf29nyb71ahyy0mur72yip6bfil3y2b7dpjdxt8iy9ptn6f5zwqzwvof7di1p57qpal29wiq55cyqtydft0z6aubo8u4ebc9vgkypl7ut4t90eom7ij5r3imm1ooi78h010gq5yn2n7ygaywgxwadxmoz2t2srlne85nv8zadb1ujwbs15ut8bk1cr6le8kk94yhsofbgj7ay9990myzu2xkyo8wl6gnktx54v3l05dkiaedawhe8nxer2v9pfp1fbtaaq6cgwp8wferusn5o8qlsag3iwfulucni2drevadhre87gj4hvru0dtuh7o05f7w3n80x9i8jjzt9utgqo73jiusgcsyguy7ndlzp18aueyoeyahpewci46t8srofbmrrsdkujq1uqk5irvkebr84w53io2k7w == \s\0\8\n\2\e\5\d\i\9\p\g\g\6\l\7\v\3\f\s\t\k\d\z\j\i\7\r\p\2\a\9\t\t\z\k\m\j\b\n\x\i\a\l\w\2\1\z\u\q\k\y\5\d\d\v\f\1\n\3\p\c\f\g\d\w\e\1\3\h\n\9\v\9\b\4\t\f\e\x\o\s\f\3\i\l\w\v\3\8\d\3\7\o\u\f\2\9\n\y\b\7\1\a\h\y\y\0\m\u\r\7\2\y\i\p\6\b\f\i\l\3\y\2\b\7\d\p\j\d\x\t\8\i\y\9\p\t\n\6\f\5\z\w\q\z\w\v\o\f\7\d\i\1\p\5\7\q\p\a\l\2\9\w\i\q\5\5\c\y\q\t\y\d\f\t\0\z\6\a\u\b\o\8\u\4\e\b\c\9\v\g\k\y\p\l\7\u\t\4\t\9\0\e\o\m\7\i\j\5\r\3\i\m\m\1\o\o\i\7\8\h\0\1\0\g\q\5\y\n\2\n\7\y\g\a\y\w\g\x\w\a\d\x\m\o\z\2\t\2\s\r\l\n\e\8\5\n\v\8\z\a\d\b\1\u\j\w\b\s\1\5\u\t\8\b\k\1\c\r\6\l\e\8\k\k\9\4\y\h\s\o\f\b\g\j\7\a\y\9\9\9\0\m\y\z\u\2\x\k\y\o\8\w\l\6\g\n\k\t\x\5\4\v\3\l\0\5\d\k\i\a\e\d\a\w\h\e\8\n\x\e\r\2\v\9\p\f\p\1\f\b\t\a\a\q\6\c\g\w\p\8\w\f\e\r\u\s\n\5\o\8\q\l\s\a\g\3\i\w\f\u\l\u\c\n\i\2\d\r\e\v\a\d\h\r\e\8\7\g\j\4\h\v\r\u\0\d\t\u\h\7\o\0\5\f\7\w\3\n\8\0\x\9\i\8\j\j\z\t\9\u\t\g\q\o\7\3\j\i\u\s\g\c\s\y\g\u\y\7\n\d\l\z\p\1\8\a\u\e\y\o\e\y\a\h\p\e\w\c\i\4\6\t\8\s\r\o\f\b\m\r\r\s\d\k\u\j\q\1\u\q\k\5\i\r\v\k\e\b\r\8\4\w\5\3\i\o\2\k\7\w ]] 00:22:42.618 00:22:42.618 real 0m1.540s 00:22:42.618 user 0m0.900s 00:22:42.618 sys 0m0.306s 00:22:42.618 15:59:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:42.618 ************************************ 00:22:42.618 END TEST dd_flag_nofollow_forced_aio 00:22:42.618 15:59:45 -- common/autotest_common.sh@10 -- # set +x 00:22:42.618 ************************************ 00:22:42.618 15:59:45 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:22:42.618 15:59:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:42.618 15:59:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:42.618 15:59:45 -- common/autotest_common.sh@10 -- # set +x 00:22:42.618 ************************************ 00:22:42.618 START TEST dd_flag_noatime_forced_aio 00:22:42.618 ************************************ 00:22:42.618 15:59:45 -- common/autotest_common.sh@1104 -- # noatime 00:22:42.618 15:59:45 -- dd/posix.sh@53 -- # local atime_if 00:22:42.618 15:59:45 -- dd/posix.sh@54 -- # local atime_of 00:22:42.618 15:59:45 -- dd/posix.sh@58 -- # gen_bytes 512 00:22:42.618 15:59:45 -- dd/common.sh@98 -- # xtrace_disable 00:22:42.618 15:59:45 -- common/autotest_common.sh@10 -- # set +x 00:22:42.618 15:59:45 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:42.618 15:59:45 -- dd/posix.sh@60 -- # atime_if=1721663985 
00:22:42.618 15:59:45 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:42.618 15:59:45 -- dd/posix.sh@61 -- # atime_of=1721663985 00:22:42.618 15:59:45 -- dd/posix.sh@66 -- # sleep 1 00:22:43.552 15:59:46 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:43.810 [2024-07-22 15:59:46.433003] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:43.810 [2024-07-22 15:59:46.433086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58368 ] 00:22:43.810 [2024-07-22 15:59:46.564941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.810 [2024-07-22 15:59:46.646854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.068  Copying: 512/512 [B] (average 500 kBps) 00:22:44.068 00:22:44.068 15:59:46 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:44.068 15:59:46 -- dd/posix.sh@69 -- # (( atime_if == 1721663985 )) 00:22:44.068 15:59:46 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:44.068 15:59:46 -- dd/posix.sh@70 -- # (( atime_of == 1721663985 )) 00:22:44.068 15:59:46 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:44.326 [2024-07-22 15:59:46.953628] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:44.326 [2024-07-22 15:59:46.953743] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58375 ] 00:22:44.326 [2024-07-22 15:59:47.084575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.326 [2024-07-22 15:59:47.166714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.584  Copying: 512/512 [B] (average 500 kBps) 00:22:44.584 00:22:44.584 15:59:47 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:44.584 15:59:47 -- dd/posix.sh@73 -- # (( atime_if < 1721663987 )) 00:22:44.584 00:22:44.584 real 0m2.075s 00:22:44.584 user 0m0.614s 00:22:44.584 sys 0m0.216s 00:22:44.584 15:59:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:44.584 15:59:47 -- common/autotest_common.sh@10 -- # set +x 00:22:44.584 ************************************ 00:22:44.584 END TEST dd_flag_noatime_forced_aio 00:22:44.584 ************************************ 00:22:44.842 15:59:47 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:22:44.842 15:59:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:44.842 15:59:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:44.842 15:59:47 -- common/autotest_common.sh@10 -- # set +x 00:22:44.842 ************************************ 00:22:44.842 START TEST dd_flags_misc_forced_aio 00:22:44.842 ************************************ 00:22:44.842 15:59:47 -- common/autotest_common.sh@1104 -- # io 00:22:44.842 15:59:47 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:22:44.842 15:59:47 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:22:44.842 15:59:47 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:22:44.842 15:59:47 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:22:44.842 15:59:47 -- dd/posix.sh@86 -- # gen_bytes 512 00:22:44.842 15:59:47 -- dd/common.sh@98 -- # xtrace_disable 00:22:44.842 15:59:47 -- common/autotest_common.sh@10 -- # set +x 00:22:44.842 15:59:47 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:22:44.842 15:59:47 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:22:44.842 [2024-07-22 15:59:47.545627] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:44.842 [2024-07-22 15:59:47.545752] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58406 ] 00:22:44.842 [2024-07-22 15:59:47.683776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.100 [2024-07-22 15:59:47.754242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.359  Copying: 512/512 [B] (average 500 kBps) 00:22:45.359 00:22:45.359 15:59:48 -- dd/posix.sh@93 -- # [[ 6sza7hwsbycqloohyojbfrxu21gy3ruv3wgr4msw6qgykyewoyod8vi7eo3tspixf1oim2v7w5ax5zf39up78fjs5go13vvanj8tkaolz7vg8nra7pzz2k8hravmyl5zlbmmdqin20k495m1a1gyaq2ws8e41pd42xh1m9ts89ya278gx0m293urdqrv59qlpbhcj7eq4rvuu0e4ier5nqjzkxtbdbqj1b3gmys09ufur8mjmf22gg4x9qtqykh5w6yxc9opc6voiwail0xhpx50zxod8xwyndiw7h1ctu2cj6x5u0xdkiloy26cged1tmtfo29wvg0m9egbnkdu1uc03wlpdlvpcdq9zxvg0558aheb8quheao3c964oe3caxe6hxv0kwlsp0dvkxpwsigzeblke7wkvsi5pqxxz5la942u5qp2oamtiugw6ihh1puft6ijckr5s4y8ff6kcqbvxzc0y4zi9zmdiwc15by2pk0622bsm92lawgjda0p == \6\s\z\a\7\h\w\s\b\y\c\q\l\o\o\h\y\o\j\b\f\r\x\u\2\1\g\y\3\r\u\v\3\w\g\r\4\m\s\w\6\q\g\y\k\y\e\w\o\y\o\d\8\v\i\7\e\o\3\t\s\p\i\x\f\1\o\i\m\2\v\7\w\5\a\x\5\z\f\3\9\u\p\7\8\f\j\s\5\g\o\1\3\v\v\a\n\j\8\t\k\a\o\l\z\7\v\g\8\n\r\a\7\p\z\z\2\k\8\h\r\a\v\m\y\l\5\z\l\b\m\m\d\q\i\n\2\0\k\4\9\5\m\1\a\1\g\y\a\q\2\w\s\8\e\4\1\p\d\4\2\x\h\1\m\9\t\s\8\9\y\a\2\7\8\g\x\0\m\2\9\3\u\r\d\q\r\v\5\9\q\l\p\b\h\c\j\7\e\q\4\r\v\u\u\0\e\4\i\e\r\5\n\q\j\z\k\x\t\b\d\b\q\j\1\b\3\g\m\y\s\0\9\u\f\u\r\8\m\j\m\f\2\2\g\g\4\x\9\q\t\q\y\k\h\5\w\6\y\x\c\9\o\p\c\6\v\o\i\w\a\i\l\0\x\h\p\x\5\0\z\x\o\d\8\x\w\y\n\d\i\w\7\h\1\c\t\u\2\c\j\6\x\5\u\0\x\d\k\i\l\o\y\2\6\c\g\e\d\1\t\m\t\f\o\2\9\w\v\g\0\m\9\e\g\b\n\k\d\u\1\u\c\0\3\w\l\p\d\l\v\p\c\d\q\9\z\x\v\g\0\5\5\8\a\h\e\b\8\q\u\h\e\a\o\3\c\9\6\4\o\e\3\c\a\x\e\6\h\x\v\0\k\w\l\s\p\0\d\v\k\x\p\w\s\i\g\z\e\b\l\k\e\7\w\k\v\s\i\5\p\q\x\x\z\5\l\a\9\4\2\u\5\q\p\2\o\a\m\t\i\u\g\w\6\i\h\h\1\p\u\f\t\6\i\j\c\k\r\5\s\4\y\8\f\f\6\k\c\q\b\v\x\z\c\0\y\4\z\i\9\z\m\d\i\w\c\1\5\b\y\2\p\k\0\6\2\2\b\s\m\9\2\l\a\w\g\j\d\a\0\p ]] 00:22:45.359 15:59:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:22:45.359 15:59:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:22:45.359 [2024-07-22 15:59:48.047375] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:45.359 [2024-07-22 15:59:48.047463] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58409 ] 00:22:45.359 [2024-07-22 15:59:48.177737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.617 [2024-07-22 15:59:48.262599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.875  Copying: 512/512 [B] (average 500 kBps) 00:22:45.875 00:22:45.875 15:59:48 -- dd/posix.sh@93 -- # [[ 6sza7hwsbycqloohyojbfrxu21gy3ruv3wgr4msw6qgykyewoyod8vi7eo3tspixf1oim2v7w5ax5zf39up78fjs5go13vvanj8tkaolz7vg8nra7pzz2k8hravmyl5zlbmmdqin20k495m1a1gyaq2ws8e41pd42xh1m9ts89ya278gx0m293urdqrv59qlpbhcj7eq4rvuu0e4ier5nqjzkxtbdbqj1b3gmys09ufur8mjmf22gg4x9qtqykh5w6yxc9opc6voiwail0xhpx50zxod8xwyndiw7h1ctu2cj6x5u0xdkiloy26cged1tmtfo29wvg0m9egbnkdu1uc03wlpdlvpcdq9zxvg0558aheb8quheao3c964oe3caxe6hxv0kwlsp0dvkxpwsigzeblke7wkvsi5pqxxz5la942u5qp2oamtiugw6ihh1puft6ijckr5s4y8ff6kcqbvxzc0y4zi9zmdiwc15by2pk0622bsm92lawgjda0p == \6\s\z\a\7\h\w\s\b\y\c\q\l\o\o\h\y\o\j\b\f\r\x\u\2\1\g\y\3\r\u\v\3\w\g\r\4\m\s\w\6\q\g\y\k\y\e\w\o\y\o\d\8\v\i\7\e\o\3\t\s\p\i\x\f\1\o\i\m\2\v\7\w\5\a\x\5\z\f\3\9\u\p\7\8\f\j\s\5\g\o\1\3\v\v\a\n\j\8\t\k\a\o\l\z\7\v\g\8\n\r\a\7\p\z\z\2\k\8\h\r\a\v\m\y\l\5\z\l\b\m\m\d\q\i\n\2\0\k\4\9\5\m\1\a\1\g\y\a\q\2\w\s\8\e\4\1\p\d\4\2\x\h\1\m\9\t\s\8\9\y\a\2\7\8\g\x\0\m\2\9\3\u\r\d\q\r\v\5\9\q\l\p\b\h\c\j\7\e\q\4\r\v\u\u\0\e\4\i\e\r\5\n\q\j\z\k\x\t\b\d\b\q\j\1\b\3\g\m\y\s\0\9\u\f\u\r\8\m\j\m\f\2\2\g\g\4\x\9\q\t\q\y\k\h\5\w\6\y\x\c\9\o\p\c\6\v\o\i\w\a\i\l\0\x\h\p\x\5\0\z\x\o\d\8\x\w\y\n\d\i\w\7\h\1\c\t\u\2\c\j\6\x\5\u\0\x\d\k\i\l\o\y\2\6\c\g\e\d\1\t\m\t\f\o\2\9\w\v\g\0\m\9\e\g\b\n\k\d\u\1\u\c\0\3\w\l\p\d\l\v\p\c\d\q\9\z\x\v\g\0\5\5\8\a\h\e\b\8\q\u\h\e\a\o\3\c\9\6\4\o\e\3\c\a\x\e\6\h\x\v\0\k\w\l\s\p\0\d\v\k\x\p\w\s\i\g\z\e\b\l\k\e\7\w\k\v\s\i\5\p\q\x\x\z\5\l\a\9\4\2\u\5\q\p\2\o\a\m\t\i\u\g\w\6\i\h\h\1\p\u\f\t\6\i\j\c\k\r\5\s\4\y\8\f\f\6\k\c\q\b\v\x\z\c\0\y\4\z\i\9\z\m\d\i\w\c\1\5\b\y\2\p\k\0\6\2\2\b\s\m\9\2\l\a\w\g\j\d\a\0\p ]] 00:22:45.875 15:59:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:22:45.875 15:59:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:22:45.875 [2024-07-22 15:59:48.557924] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:45.875 [2024-07-22 15:59:48.558011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58421 ] 00:22:45.875 [2024-07-22 15:59:48.685848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.133 [2024-07-22 15:59:48.742708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.133  Copying: 512/512 [B] (average 250 kBps) 00:22:46.133 00:22:46.133 15:59:48 -- dd/posix.sh@93 -- # [[ 6sza7hwsbycqloohyojbfrxu21gy3ruv3wgr4msw6qgykyewoyod8vi7eo3tspixf1oim2v7w5ax5zf39up78fjs5go13vvanj8tkaolz7vg8nra7pzz2k8hravmyl5zlbmmdqin20k495m1a1gyaq2ws8e41pd42xh1m9ts89ya278gx0m293urdqrv59qlpbhcj7eq4rvuu0e4ier5nqjzkxtbdbqj1b3gmys09ufur8mjmf22gg4x9qtqykh5w6yxc9opc6voiwail0xhpx50zxod8xwyndiw7h1ctu2cj6x5u0xdkiloy26cged1tmtfo29wvg0m9egbnkdu1uc03wlpdlvpcdq9zxvg0558aheb8quheao3c964oe3caxe6hxv0kwlsp0dvkxpwsigzeblke7wkvsi5pqxxz5la942u5qp2oamtiugw6ihh1puft6ijckr5s4y8ff6kcqbvxzc0y4zi9zmdiwc15by2pk0622bsm92lawgjda0p == \6\s\z\a\7\h\w\s\b\y\c\q\l\o\o\h\y\o\j\b\f\r\x\u\2\1\g\y\3\r\u\v\3\w\g\r\4\m\s\w\6\q\g\y\k\y\e\w\o\y\o\d\8\v\i\7\e\o\3\t\s\p\i\x\f\1\o\i\m\2\v\7\w\5\a\x\5\z\f\3\9\u\p\7\8\f\j\s\5\g\o\1\3\v\v\a\n\j\8\t\k\a\o\l\z\7\v\g\8\n\r\a\7\p\z\z\2\k\8\h\r\a\v\m\y\l\5\z\l\b\m\m\d\q\i\n\2\0\k\4\9\5\m\1\a\1\g\y\a\q\2\w\s\8\e\4\1\p\d\4\2\x\h\1\m\9\t\s\8\9\y\a\2\7\8\g\x\0\m\2\9\3\u\r\d\q\r\v\5\9\q\l\p\b\h\c\j\7\e\q\4\r\v\u\u\0\e\4\i\e\r\5\n\q\j\z\k\x\t\b\d\b\q\j\1\b\3\g\m\y\s\0\9\u\f\u\r\8\m\j\m\f\2\2\g\g\4\x\9\q\t\q\y\k\h\5\w\6\y\x\c\9\o\p\c\6\v\o\i\w\a\i\l\0\x\h\p\x\5\0\z\x\o\d\8\x\w\y\n\d\i\w\7\h\1\c\t\u\2\c\j\6\x\5\u\0\x\d\k\i\l\o\y\2\6\c\g\e\d\1\t\m\t\f\o\2\9\w\v\g\0\m\9\e\g\b\n\k\d\u\1\u\c\0\3\w\l\p\d\l\v\p\c\d\q\9\z\x\v\g\0\5\5\8\a\h\e\b\8\q\u\h\e\a\o\3\c\9\6\4\o\e\3\c\a\x\e\6\h\x\v\0\k\w\l\s\p\0\d\v\k\x\p\w\s\i\g\z\e\b\l\k\e\7\w\k\v\s\i\5\p\q\x\x\z\5\l\a\9\4\2\u\5\q\p\2\o\a\m\t\i\u\g\w\6\i\h\h\1\p\u\f\t\6\i\j\c\k\r\5\s\4\y\8\f\f\6\k\c\q\b\v\x\z\c\0\y\4\z\i\9\z\m\d\i\w\c\1\5\b\y\2\p\k\0\6\2\2\b\s\m\9\2\l\a\w\g\j\d\a\0\p ]] 00:22:46.133 15:59:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:22:46.133 15:59:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:22:46.392 [2024-07-22 15:59:49.020794] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:46.392 [2024-07-22 15:59:49.020894] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58424 ] 00:22:46.392 [2024-07-22 15:59:49.152513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.392 [2024-07-22 15:59:49.212002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.650  Copying: 512/512 [B] (average 500 kBps) 00:22:46.650 00:22:46.650 15:59:49 -- dd/posix.sh@93 -- # [[ 6sza7hwsbycqloohyojbfrxu21gy3ruv3wgr4msw6qgykyewoyod8vi7eo3tspixf1oim2v7w5ax5zf39up78fjs5go13vvanj8tkaolz7vg8nra7pzz2k8hravmyl5zlbmmdqin20k495m1a1gyaq2ws8e41pd42xh1m9ts89ya278gx0m293urdqrv59qlpbhcj7eq4rvuu0e4ier5nqjzkxtbdbqj1b3gmys09ufur8mjmf22gg4x9qtqykh5w6yxc9opc6voiwail0xhpx50zxod8xwyndiw7h1ctu2cj6x5u0xdkiloy26cged1tmtfo29wvg0m9egbnkdu1uc03wlpdlvpcdq9zxvg0558aheb8quheao3c964oe3caxe6hxv0kwlsp0dvkxpwsigzeblke7wkvsi5pqxxz5la942u5qp2oamtiugw6ihh1puft6ijckr5s4y8ff6kcqbvxzc0y4zi9zmdiwc15by2pk0622bsm92lawgjda0p == \6\s\z\a\7\h\w\s\b\y\c\q\l\o\o\h\y\o\j\b\f\r\x\u\2\1\g\y\3\r\u\v\3\w\g\r\4\m\s\w\6\q\g\y\k\y\e\w\o\y\o\d\8\v\i\7\e\o\3\t\s\p\i\x\f\1\o\i\m\2\v\7\w\5\a\x\5\z\f\3\9\u\p\7\8\f\j\s\5\g\o\1\3\v\v\a\n\j\8\t\k\a\o\l\z\7\v\g\8\n\r\a\7\p\z\z\2\k\8\h\r\a\v\m\y\l\5\z\l\b\m\m\d\q\i\n\2\0\k\4\9\5\m\1\a\1\g\y\a\q\2\w\s\8\e\4\1\p\d\4\2\x\h\1\m\9\t\s\8\9\y\a\2\7\8\g\x\0\m\2\9\3\u\r\d\q\r\v\5\9\q\l\p\b\h\c\j\7\e\q\4\r\v\u\u\0\e\4\i\e\r\5\n\q\j\z\k\x\t\b\d\b\q\j\1\b\3\g\m\y\s\0\9\u\f\u\r\8\m\j\m\f\2\2\g\g\4\x\9\q\t\q\y\k\h\5\w\6\y\x\c\9\o\p\c\6\v\o\i\w\a\i\l\0\x\h\p\x\5\0\z\x\o\d\8\x\w\y\n\d\i\w\7\h\1\c\t\u\2\c\j\6\x\5\u\0\x\d\k\i\l\o\y\2\6\c\g\e\d\1\t\m\t\f\o\2\9\w\v\g\0\m\9\e\g\b\n\k\d\u\1\u\c\0\3\w\l\p\d\l\v\p\c\d\q\9\z\x\v\g\0\5\5\8\a\h\e\b\8\q\u\h\e\a\o\3\c\9\6\4\o\e\3\c\a\x\e\6\h\x\v\0\k\w\l\s\p\0\d\v\k\x\p\w\s\i\g\z\e\b\l\k\e\7\w\k\v\s\i\5\p\q\x\x\z\5\l\a\9\4\2\u\5\q\p\2\o\a\m\t\i\u\g\w\6\i\h\h\1\p\u\f\t\6\i\j\c\k\r\5\s\4\y\8\f\f\6\k\c\q\b\v\x\z\c\0\y\4\z\i\9\z\m\d\i\w\c\1\5\b\y\2\p\k\0\6\2\2\b\s\m\9\2\l\a\w\g\j\d\a\0\p ]] 00:22:46.650 15:59:49 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:22:46.650 15:59:49 -- dd/posix.sh@86 -- # gen_bytes 512 00:22:46.650 15:59:49 -- dd/common.sh@98 -- # xtrace_disable 00:22:46.650 15:59:49 -- common/autotest_common.sh@10 -- # set +x 00:22:46.650 15:59:49 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:22:46.650 15:59:49 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:22:46.908 [2024-07-22 15:59:49.529163] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:46.908 [2024-07-22 15:59:49.529264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58432 ] 00:22:46.908 [2024-07-22 15:59:49.665078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.908 [2024-07-22 15:59:49.735450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.167  Copying: 512/512 [B] (average 500 kBps) 00:22:47.167 00:22:47.167 15:59:49 -- dd/posix.sh@93 -- # [[ jbyfmpsq5f7zmunwyhc2ljyn0wz5wiu6q6yobit4d2kqg7ko6zfthkaonfznv1ah8wwl307ps1d5xqv2e71g7evt80kdc78rwc2amfbir4cqo58bwblibaqik3a0gbdxjpccyc3k45evtamseslf80draz3wozrpw0qvnleah65og7woa5ff9mw3i550qm8mhfc7ul2e4s3q3nfdrql1jvz4vz52f2xypur341ri2fy0w2ltx65tkqhkbl9bknte6ogl4fz5ppm5ccci9oqpfnjlehva7l88hck799bxh654t2b8ky48cp6gqlmiqy0cpfzzvf1enwz91c1xmcxc5eal0dhi3f372jckyu2znygk7ettsxd400ly50efj8jye3geokp9r81o24yqpa0mup2sonpwak03ialm4wpbvi0m06kk09gzch508s069076kfgfbhucdp29krq9ogp6a3gs19wna6wv8kbu87877b1xix5u7h27hxfvhxd190f2 == \j\b\y\f\m\p\s\q\5\f\7\z\m\u\n\w\y\h\c\2\l\j\y\n\0\w\z\5\w\i\u\6\q\6\y\o\b\i\t\4\d\2\k\q\g\7\k\o\6\z\f\t\h\k\a\o\n\f\z\n\v\1\a\h\8\w\w\l\3\0\7\p\s\1\d\5\x\q\v\2\e\7\1\g\7\e\v\t\8\0\k\d\c\7\8\r\w\c\2\a\m\f\b\i\r\4\c\q\o\5\8\b\w\b\l\i\b\a\q\i\k\3\a\0\g\b\d\x\j\p\c\c\y\c\3\k\4\5\e\v\t\a\m\s\e\s\l\f\8\0\d\r\a\z\3\w\o\z\r\p\w\0\q\v\n\l\e\a\h\6\5\o\g\7\w\o\a\5\f\f\9\m\w\3\i\5\5\0\q\m\8\m\h\f\c\7\u\l\2\e\4\s\3\q\3\n\f\d\r\q\l\1\j\v\z\4\v\z\5\2\f\2\x\y\p\u\r\3\4\1\r\i\2\f\y\0\w\2\l\t\x\6\5\t\k\q\h\k\b\l\9\b\k\n\t\e\6\o\g\l\4\f\z\5\p\p\m\5\c\c\c\i\9\o\q\p\f\n\j\l\e\h\v\a\7\l\8\8\h\c\k\7\9\9\b\x\h\6\5\4\t\2\b\8\k\y\4\8\c\p\6\g\q\l\m\i\q\y\0\c\p\f\z\z\v\f\1\e\n\w\z\9\1\c\1\x\m\c\x\c\5\e\a\l\0\d\h\i\3\f\3\7\2\j\c\k\y\u\2\z\n\y\g\k\7\e\t\t\s\x\d\4\0\0\l\y\5\0\e\f\j\8\j\y\e\3\g\e\o\k\p\9\r\8\1\o\2\4\y\q\p\a\0\m\u\p\2\s\o\n\p\w\a\k\0\3\i\a\l\m\4\w\p\b\v\i\0\m\0\6\k\k\0\9\g\z\c\h\5\0\8\s\0\6\9\0\7\6\k\f\g\f\b\h\u\c\d\p\2\9\k\r\q\9\o\g\p\6\a\3\g\s\1\9\w\n\a\6\w\v\8\k\b\u\8\7\8\7\7\b\1\x\i\x\5\u\7\h\2\7\h\x\f\v\h\x\d\1\9\0\f\2 ]] 00:22:47.167 15:59:49 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:22:47.167 15:59:49 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:22:47.425 [2024-07-22 15:59:50.047295] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:47.425 [2024-07-22 15:59:50.047397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58439 ] 00:22:47.425 [2024-07-22 15:59:50.192468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.425 [2024-07-22 15:59:50.250902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.683  Copying: 512/512 [B] (average 500 kBps) 00:22:47.683 00:22:47.684 15:59:50 -- dd/posix.sh@93 -- # [[ jbyfmpsq5f7zmunwyhc2ljyn0wz5wiu6q6yobit4d2kqg7ko6zfthkaonfznv1ah8wwl307ps1d5xqv2e71g7evt80kdc78rwc2amfbir4cqo58bwblibaqik3a0gbdxjpccyc3k45evtamseslf80draz3wozrpw0qvnleah65og7woa5ff9mw3i550qm8mhfc7ul2e4s3q3nfdrql1jvz4vz52f2xypur341ri2fy0w2ltx65tkqhkbl9bknte6ogl4fz5ppm5ccci9oqpfnjlehva7l88hck799bxh654t2b8ky48cp6gqlmiqy0cpfzzvf1enwz91c1xmcxc5eal0dhi3f372jckyu2znygk7ettsxd400ly50efj8jye3geokp9r81o24yqpa0mup2sonpwak03ialm4wpbvi0m06kk09gzch508s069076kfgfbhucdp29krq9ogp6a3gs19wna6wv8kbu87877b1xix5u7h27hxfvhxd190f2 == \j\b\y\f\m\p\s\q\5\f\7\z\m\u\n\w\y\h\c\2\l\j\y\n\0\w\z\5\w\i\u\6\q\6\y\o\b\i\t\4\d\2\k\q\g\7\k\o\6\z\f\t\h\k\a\o\n\f\z\n\v\1\a\h\8\w\w\l\3\0\7\p\s\1\d\5\x\q\v\2\e\7\1\g\7\e\v\t\8\0\k\d\c\7\8\r\w\c\2\a\m\f\b\i\r\4\c\q\o\5\8\b\w\b\l\i\b\a\q\i\k\3\a\0\g\b\d\x\j\p\c\c\y\c\3\k\4\5\e\v\t\a\m\s\e\s\l\f\8\0\d\r\a\z\3\w\o\z\r\p\w\0\q\v\n\l\e\a\h\6\5\o\g\7\w\o\a\5\f\f\9\m\w\3\i\5\5\0\q\m\8\m\h\f\c\7\u\l\2\e\4\s\3\q\3\n\f\d\r\q\l\1\j\v\z\4\v\z\5\2\f\2\x\y\p\u\r\3\4\1\r\i\2\f\y\0\w\2\l\t\x\6\5\t\k\q\h\k\b\l\9\b\k\n\t\e\6\o\g\l\4\f\z\5\p\p\m\5\c\c\c\i\9\o\q\p\f\n\j\l\e\h\v\a\7\l\8\8\h\c\k\7\9\9\b\x\h\6\5\4\t\2\b\8\k\y\4\8\c\p\6\g\q\l\m\i\q\y\0\c\p\f\z\z\v\f\1\e\n\w\z\9\1\c\1\x\m\c\x\c\5\e\a\l\0\d\h\i\3\f\3\7\2\j\c\k\y\u\2\z\n\y\g\k\7\e\t\t\s\x\d\4\0\0\l\y\5\0\e\f\j\8\j\y\e\3\g\e\o\k\p\9\r\8\1\o\2\4\y\q\p\a\0\m\u\p\2\s\o\n\p\w\a\k\0\3\i\a\l\m\4\w\p\b\v\i\0\m\0\6\k\k\0\9\g\z\c\h\5\0\8\s\0\6\9\0\7\6\k\f\g\f\b\h\u\c\d\p\2\9\k\r\q\9\o\g\p\6\a\3\g\s\1\9\w\n\a\6\w\v\8\k\b\u\8\7\8\7\7\b\1\x\i\x\5\u\7\h\2\7\h\x\f\v\h\x\d\1\9\0\f\2 ]] 00:22:47.684 15:59:50 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:22:47.684 15:59:50 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:22:47.684 [2024-07-22 15:59:50.541652] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:47.684 [2024-07-22 15:59:50.541758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58447 ] 00:22:47.941 [2024-07-22 15:59:50.675534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.941 [2024-07-22 15:59:50.757326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.200  Copying: 512/512 [B] (average 500 kBps) 00:22:48.200 00:22:48.200 15:59:51 -- dd/posix.sh@93 -- # [[ jbyfmpsq5f7zmunwyhc2ljyn0wz5wiu6q6yobit4d2kqg7ko6zfthkaonfznv1ah8wwl307ps1d5xqv2e71g7evt80kdc78rwc2amfbir4cqo58bwblibaqik3a0gbdxjpccyc3k45evtamseslf80draz3wozrpw0qvnleah65og7woa5ff9mw3i550qm8mhfc7ul2e4s3q3nfdrql1jvz4vz52f2xypur341ri2fy0w2ltx65tkqhkbl9bknte6ogl4fz5ppm5ccci9oqpfnjlehva7l88hck799bxh654t2b8ky48cp6gqlmiqy0cpfzzvf1enwz91c1xmcxc5eal0dhi3f372jckyu2znygk7ettsxd400ly50efj8jye3geokp9r81o24yqpa0mup2sonpwak03ialm4wpbvi0m06kk09gzch508s069076kfgfbhucdp29krq9ogp6a3gs19wna6wv8kbu87877b1xix5u7h27hxfvhxd190f2 == \j\b\y\f\m\p\s\q\5\f\7\z\m\u\n\w\y\h\c\2\l\j\y\n\0\w\z\5\w\i\u\6\q\6\y\o\b\i\t\4\d\2\k\q\g\7\k\o\6\z\f\t\h\k\a\o\n\f\z\n\v\1\a\h\8\w\w\l\3\0\7\p\s\1\d\5\x\q\v\2\e\7\1\g\7\e\v\t\8\0\k\d\c\7\8\r\w\c\2\a\m\f\b\i\r\4\c\q\o\5\8\b\w\b\l\i\b\a\q\i\k\3\a\0\g\b\d\x\j\p\c\c\y\c\3\k\4\5\e\v\t\a\m\s\e\s\l\f\8\0\d\r\a\z\3\w\o\z\r\p\w\0\q\v\n\l\e\a\h\6\5\o\g\7\w\o\a\5\f\f\9\m\w\3\i\5\5\0\q\m\8\m\h\f\c\7\u\l\2\e\4\s\3\q\3\n\f\d\r\q\l\1\j\v\z\4\v\z\5\2\f\2\x\y\p\u\r\3\4\1\r\i\2\f\y\0\w\2\l\t\x\6\5\t\k\q\h\k\b\l\9\b\k\n\t\e\6\o\g\l\4\f\z\5\p\p\m\5\c\c\c\i\9\o\q\p\f\n\j\l\e\h\v\a\7\l\8\8\h\c\k\7\9\9\b\x\h\6\5\4\t\2\b\8\k\y\4\8\c\p\6\g\q\l\m\i\q\y\0\c\p\f\z\z\v\f\1\e\n\w\z\9\1\c\1\x\m\c\x\c\5\e\a\l\0\d\h\i\3\f\3\7\2\j\c\k\y\u\2\z\n\y\g\k\7\e\t\t\s\x\d\4\0\0\l\y\5\0\e\f\j\8\j\y\e\3\g\e\o\k\p\9\r\8\1\o\2\4\y\q\p\a\0\m\u\p\2\s\o\n\p\w\a\k\0\3\i\a\l\m\4\w\p\b\v\i\0\m\0\6\k\k\0\9\g\z\c\h\5\0\8\s\0\6\9\0\7\6\k\f\g\f\b\h\u\c\d\p\2\9\k\r\q\9\o\g\p\6\a\3\g\s\1\9\w\n\a\6\w\v\8\k\b\u\8\7\8\7\7\b\1\x\i\x\5\u\7\h\2\7\h\x\f\v\h\x\d\1\9\0\f\2 ]] 00:22:48.200 15:59:51 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:22:48.200 15:59:51 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:22:48.458 [2024-07-22 15:59:51.069274] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:48.458 [2024-07-22 15:59:51.069398] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58454 ] 00:22:48.458 [2024-07-22 15:59:51.214007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.458 [2024-07-22 15:59:51.297650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.717  Copying: 512/512 [B] (average 500 kBps) 00:22:48.717 00:22:48.717 15:59:51 -- dd/posix.sh@93 -- # [[ jbyfmpsq5f7zmunwyhc2ljyn0wz5wiu6q6yobit4d2kqg7ko6zfthkaonfznv1ah8wwl307ps1d5xqv2e71g7evt80kdc78rwc2amfbir4cqo58bwblibaqik3a0gbdxjpccyc3k45evtamseslf80draz3wozrpw0qvnleah65og7woa5ff9mw3i550qm8mhfc7ul2e4s3q3nfdrql1jvz4vz52f2xypur341ri2fy0w2ltx65tkqhkbl9bknte6ogl4fz5ppm5ccci9oqpfnjlehva7l88hck799bxh654t2b8ky48cp6gqlmiqy0cpfzzvf1enwz91c1xmcxc5eal0dhi3f372jckyu2znygk7ettsxd400ly50efj8jye3geokp9r81o24yqpa0mup2sonpwak03ialm4wpbvi0m06kk09gzch508s069076kfgfbhucdp29krq9ogp6a3gs19wna6wv8kbu87877b1xix5u7h27hxfvhxd190f2 == \j\b\y\f\m\p\s\q\5\f\7\z\m\u\n\w\y\h\c\2\l\j\y\n\0\w\z\5\w\i\u\6\q\6\y\o\b\i\t\4\d\2\k\q\g\7\k\o\6\z\f\t\h\k\a\o\n\f\z\n\v\1\a\h\8\w\w\l\3\0\7\p\s\1\d\5\x\q\v\2\e\7\1\g\7\e\v\t\8\0\k\d\c\7\8\r\w\c\2\a\m\f\b\i\r\4\c\q\o\5\8\b\w\b\l\i\b\a\q\i\k\3\a\0\g\b\d\x\j\p\c\c\y\c\3\k\4\5\e\v\t\a\m\s\e\s\l\f\8\0\d\r\a\z\3\w\o\z\r\p\w\0\q\v\n\l\e\a\h\6\5\o\g\7\w\o\a\5\f\f\9\m\w\3\i\5\5\0\q\m\8\m\h\f\c\7\u\l\2\e\4\s\3\q\3\n\f\d\r\q\l\1\j\v\z\4\v\z\5\2\f\2\x\y\p\u\r\3\4\1\r\i\2\f\y\0\w\2\l\t\x\6\5\t\k\q\h\k\b\l\9\b\k\n\t\e\6\o\g\l\4\f\z\5\p\p\m\5\c\c\c\i\9\o\q\p\f\n\j\l\e\h\v\a\7\l\8\8\h\c\k\7\9\9\b\x\h\6\5\4\t\2\b\8\k\y\4\8\c\p\6\g\q\l\m\i\q\y\0\c\p\f\z\z\v\f\1\e\n\w\z\9\1\c\1\x\m\c\x\c\5\e\a\l\0\d\h\i\3\f\3\7\2\j\c\k\y\u\2\z\n\y\g\k\7\e\t\t\s\x\d\4\0\0\l\y\5\0\e\f\j\8\j\y\e\3\g\e\o\k\p\9\r\8\1\o\2\4\y\q\p\a\0\m\u\p\2\s\o\n\p\w\a\k\0\3\i\a\l\m\4\w\p\b\v\i\0\m\0\6\k\k\0\9\g\z\c\h\5\0\8\s\0\6\9\0\7\6\k\f\g\f\b\h\u\c\d\p\2\9\k\r\q\9\o\g\p\6\a\3\g\s\1\9\w\n\a\6\w\v\8\k\b\u\8\7\8\7\7\b\1\x\i\x\5\u\7\h\2\7\h\x\f\v\h\x\d\1\9\0\f\2 ]] 00:22:48.717 00:22:48.717 real 0m4.059s 00:22:48.717 user 0m2.300s 00:22:48.717 sys 0m0.758s 00:22:48.717 15:59:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:48.717 15:59:51 -- common/autotest_common.sh@10 -- # set +x 00:22:48.717 ************************************ 00:22:48.717 END TEST dd_flags_misc_forced_aio 00:22:48.717 ************************************ 00:22:48.717 15:59:51 -- dd/posix.sh@1 -- # cleanup 00:22:48.717 15:59:51 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:22:48.717 15:59:51 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:22:48.717 00:22:48.717 real 0m18.748s 00:22:48.717 user 0m9.472s 00:22:48.717 sys 0m3.413s 00:22:48.717 15:59:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:48.717 15:59:51 -- common/autotest_common.sh@10 -- # set +x 00:22:48.717 ************************************ 00:22:48.717 END TEST spdk_dd_posix 00:22:48.717 ************************************ 00:22:48.978 15:59:51 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:22:48.978 15:59:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:48.978 15:59:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:48.978 15:59:51 -- 
common/autotest_common.sh@10 -- # set +x 00:22:48.978 ************************************ 00:22:48.978 START TEST spdk_dd_malloc 00:22:48.978 ************************************ 00:22:48.978 15:59:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:22:48.978 * Looking for test storage... 00:22:48.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:22:48.978 15:59:51 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:48.978 15:59:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.978 15:59:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.978 15:59:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.978 15:59:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.979 15:59:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.979 15:59:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.979 15:59:51 -- paths/export.sh@5 -- # export PATH 00:22:48.979 15:59:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.979 15:59:51 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:22:48.979 15:59:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:48.979 15:59:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:48.979 15:59:51 -- common/autotest_common.sh@10 -- # set +x 00:22:48.979 ************************************ 00:22:48.979 START TEST dd_malloc_copy 00:22:48.979 
************************************ 00:22:48.979 15:59:51 -- common/autotest_common.sh@1104 -- # malloc_copy 00:22:48.979 15:59:51 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:22:48.979 15:59:51 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:22:48.979 15:59:51 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:22:48.979 15:59:51 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:22:48.979 15:59:51 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:22:48.979 15:59:51 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:22:48.979 15:59:51 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:22:48.979 15:59:51 -- dd/malloc.sh@28 -- # gen_conf 00:22:48.979 15:59:51 -- dd/common.sh@31 -- # xtrace_disable 00:22:48.979 15:59:51 -- common/autotest_common.sh@10 -- # set +x 00:22:48.979 [2024-07-22 15:59:51.759192] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:48.979 [2024-07-22 15:59:51.759319] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58526 ] 00:22:48.979 { 00:22:48.979 "subsystems": [ 00:22:48.979 { 00:22:48.979 "subsystem": "bdev", 00:22:48.979 "config": [ 00:22:48.979 { 00:22:48.979 "params": { 00:22:48.979 "block_size": 512, 00:22:48.979 "num_blocks": 1048576, 00:22:48.979 "name": "malloc0" 00:22:48.979 }, 00:22:48.979 "method": "bdev_malloc_create" 00:22:48.979 }, 00:22:48.979 { 00:22:48.979 "params": { 00:22:48.979 "block_size": 512, 00:22:48.979 "num_blocks": 1048576, 00:22:48.979 "name": "malloc1" 00:22:48.979 }, 00:22:48.979 "method": "bdev_malloc_create" 00:22:48.979 }, 00:22:48.979 { 00:22:48.979 "method": "bdev_wait_for_examine" 00:22:48.979 } 00:22:48.979 ] 00:22:48.979 } 00:22:48.979 ] 00:22:48.979 } 00:22:49.238 [2024-07-22 15:59:51.900994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.238 [2024-07-22 15:59:51.972525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.367  Copying: 198/512 [MB] (198 MBps) Copying: 385/512 [MB] (186 MBps) Copying: 512/512 [MB] (average 194 MBps) 00:22:52.368 00:22:52.368 15:59:55 -- dd/malloc.sh@33 -- # gen_conf 00:22:52.368 15:59:55 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:22:52.368 15:59:55 -- dd/common.sh@31 -- # xtrace_disable 00:22:52.368 15:59:55 -- common/autotest_common.sh@10 -- # set +x 00:22:52.626 [2024-07-22 15:59:55.250656] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
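Both directions of dd_malloc_copy are driven by the JSON printed above, handed to spdk_dd on fd 62: two malloc bdevs of 1048576 blocks x 512 bytes (512 MiB each) plus a bdev_wait_for_examine step, with only --ib/--ob swapped for the reverse pass now starting. A hedged standalone equivalent, with /tmp/malloc_copy.json standing in for the fd-62 config:

cat > /tmp/malloc_copy.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_malloc_create", "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
  { "method": "bdev_malloc_create", "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
$DD --ib=malloc0 --ob=malloc1 --json /tmp/malloc_copy.json   # forward pass
$DD --ib=malloc1 --ob=malloc0 --json /tmp/malloc_copy.json   # reverse pass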
00:22:52.626 [2024-07-22 15:59:55.250752] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58575 ] 00:22:52.626 { 00:22:52.626 "subsystems": [ 00:22:52.626 { 00:22:52.626 "subsystem": "bdev", 00:22:52.626 "config": [ 00:22:52.626 { 00:22:52.626 "params": { 00:22:52.626 "block_size": 512, 00:22:52.626 "num_blocks": 1048576, 00:22:52.626 "name": "malloc0" 00:22:52.626 }, 00:22:52.626 "method": "bdev_malloc_create" 00:22:52.626 }, 00:22:52.626 { 00:22:52.626 "params": { 00:22:52.626 "block_size": 512, 00:22:52.626 "num_blocks": 1048576, 00:22:52.626 "name": "malloc1" 00:22:52.626 }, 00:22:52.626 "method": "bdev_malloc_create" 00:22:52.626 }, 00:22:52.626 { 00:22:52.626 "method": "bdev_wait_for_examine" 00:22:52.626 } 00:22:52.626 ] 00:22:52.626 } 00:22:52.626 ] 00:22:52.626 } 00:22:52.626 [2024-07-22 15:59:55.379769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.627 [2024-07-22 15:59:55.436855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.865  Copying: 192/512 [MB] (192 MBps) Copying: 390/512 [MB] (197 MBps) Copying: 512/512 [MB] (average 194 MBps) 00:22:55.865 00:22:55.865 00:22:55.865 real 0m6.981s 00:22:55.865 user 0m6.323s 00:22:55.865 sys 0m0.499s 00:22:55.865 15:59:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:55.865 ************************************ 00:22:55.865 END TEST dd_malloc_copy 00:22:55.865 ************************************ 00:22:55.865 15:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:55.865 00:22:55.865 real 0m7.096s 00:22:55.865 user 0m6.368s 00:22:55.865 sys 0m0.567s 00:22:55.865 15:59:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:55.865 15:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:55.865 ************************************ 00:22:55.865 END TEST spdk_dd_malloc 00:22:55.865 ************************************ 00:22:56.123 15:59:58 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:22:56.123 15:59:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:22:56.123 15:59:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:56.123 15:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:56.123 ************************************ 00:22:56.123 START TEST spdk_dd_bdev_to_bdev 00:22:56.123 ************************************ 00:22:56.123 15:59:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:22:56.123 * Looking for test storage... 
00:22:56.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:22:56.123 15:59:58 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:56.123 15:59:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.123 15:59:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.123 15:59:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.123 15:59:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.123 15:59:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.123 15:59:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.123 15:59:58 -- paths/export.sh@5 -- # export PATH 00:22:56.123 15:59:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.123 15:59:58 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:22:56.123 15:59:58 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:22:56.123 15:59:58 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:22:56.123 15:59:58 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:22:56.123 15:59:58 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:22:56.123 15:59:58 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:22:56.123 15:59:58 -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:06.0 00:22:56.123 15:59:58 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:22:56.123 15:59:58 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:22:56.123 15:59:58 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:22:56.123 15:59:58 -- dd/bdev_to_bdev.sh@55 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:22:56.123 15:59:58 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:22:56.123 15:59:58 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:22:56.123 15:59:58 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:22:56.123 15:59:58 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:56.123 15:59:58 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:56.123 15:59:58 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:22:56.123 15:59:58 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:22:56.123 15:59:58 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:22:56.123 15:59:58 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:22:56.123 15:59:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:56.123 15:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:56.123 ************************************ 00:22:56.123 START TEST dd_inflate_file 00:22:56.123 ************************************ 00:22:56.123 15:59:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:22:56.123 [2024-07-22 15:59:58.881054] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:56.123 [2024-07-22 15:59:58.881153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58673 ] 00:22:56.381 [2024-07-22 15:59:59.015338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.381 [2024-07-22 15:59:59.117954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.639  Copying: 64/64 [MB] (average 1523 MBps) 00:22:56.639 00:22:56.639 00:22:56.639 real 0m0.603s 00:22:56.639 user 0m0.322s 00:22:56.639 sys 0m0.153s 00:22:56.639 15:59:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:56.639 15:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:56.639 ************************************ 00:22:56.639 END TEST dd_inflate_file 00:22:56.639 ************************************ 00:22:56.639 15:59:59 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:22:56.639 15:59:59 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:22:56.639 15:59:59 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:22:56.639 15:59:59 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:22:56.639 15:59:59 -- dd/common.sh@31 -- # xtrace_disable 00:22:56.639 15:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:56.639 15:59:59 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:56.639 15:59:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:56.639 15:59:59 -- common/autotest_common.sh@10 -- # set +x 00:22:56.639 ************************************ 00:22:56.639 START TEST dd_copy_to_out_bdev 
00:22:56.639 ************************************ 00:22:56.639 15:59:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:22:56.898 [2024-07-22 15:59:59.543629] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:56.898 [2024-07-22 15:59:59.543746] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58711 ] 00:22:56.898 { 00:22:56.898 "subsystems": [ 00:22:56.898 { 00:22:56.898 "subsystem": "bdev", 00:22:56.898 "config": [ 00:22:56.898 { 00:22:56.898 "params": { 00:22:56.898 "trtype": "pcie", 00:22:56.898 "traddr": "0000:00:06.0", 00:22:56.898 "name": "Nvme0" 00:22:56.898 }, 00:22:56.898 "method": "bdev_nvme_attach_controller" 00:22:56.898 }, 00:22:56.898 { 00:22:56.898 "params": { 00:22:56.898 "trtype": "pcie", 00:22:56.898 "traddr": "0000:00:07.0", 00:22:56.898 "name": "Nvme1" 00:22:56.898 }, 00:22:56.898 "method": "bdev_nvme_attach_controller" 00:22:56.898 }, 00:22:56.898 { 00:22:56.898 "method": "bdev_wait_for_examine" 00:22:56.898 } 00:22:56.898 ] 00:22:56.898 } 00:22:56.898 ] 00:22:56.898 } 00:22:56.898 [2024-07-22 15:59:59.687523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.898 [2024-07-22 15:59:59.747184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.558  Copying: 61/64 [MB] (61 MBps) Copying: 64/64 [MB] (average 61 MBps) 00:22:58.558 00:22:58.558 00:22:58.558 real 0m1.711s 00:22:58.558 user 0m1.448s 00:22:58.558 sys 0m0.193s 00:22:58.558 16:00:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:58.558 16:00:01 -- common/autotest_common.sh@10 -- # set +x 00:22:58.558 ************************************ 00:22:58.558 END TEST dd_copy_to_out_bdev 00:22:58.558 ************************************ 00:22:58.558 16:00:01 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:22:58.558 16:00:01 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:22:58.558 16:00:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:58.558 16:00:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:58.558 16:00:01 -- common/autotest_common.sh@10 -- # set +x 00:22:58.558 ************************************ 00:22:58.558 START TEST dd_offset_magic 00:22:58.558 ************************************ 00:22:58.558 16:00:01 -- common/autotest_common.sh@1104 -- # offset_magic 00:22:58.558 16:00:01 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:22:58.558 16:00:01 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:22:58.558 16:00:01 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:22:58.559 16:00:01 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:22:58.559 16:00:01 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:22:58.559 16:00:01 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:22:58.559 16:00:01 -- dd/common.sh@31 -- # xtrace_disable 00:22:58.559 16:00:01 -- common/autotest_common.sh@10 -- # set +x 00:22:58.559 { 00:22:58.559 "subsystems": [ 00:22:58.559 { 00:22:58.559 "subsystem": "bdev", 00:22:58.559 "config": [ 00:22:58.559 { 00:22:58.559 "params": { 00:22:58.559 "trtype": "pcie", 00:22:58.559 "traddr": "0000:00:06.0", 00:22:58.559 "name": "Nvme0" 00:22:58.559 }, 
00:22:58.559 "method": "bdev_nvme_attach_controller" 00:22:58.559 }, 00:22:58.559 { 00:22:58.559 "params": { 00:22:58.559 "trtype": "pcie", 00:22:58.559 "traddr": "0000:00:07.0", 00:22:58.559 "name": "Nvme1" 00:22:58.559 }, 00:22:58.559 "method": "bdev_nvme_attach_controller" 00:22:58.559 }, 00:22:58.559 { 00:22:58.559 "method": "bdev_wait_for_examine" 00:22:58.559 } 00:22:58.559 ] 00:22:58.559 } 00:22:58.559 ] 00:22:58.559 } 00:22:58.559 [2024-07-22 16:00:01.304548] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:58.559 [2024-07-22 16:00:01.304665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58752 ] 00:22:58.817 [2024-07-22 16:00:01.445263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.817 [2024-07-22 16:00:01.527590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.330  Copying: 65/65 [MB] (average 1300 MBps) 00:22:59.330 00:22:59.330 16:00:02 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:22:59.330 16:00:02 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:22:59.330 16:00:02 -- dd/common.sh@31 -- # xtrace_disable 00:22:59.330 16:00:02 -- common/autotest_common.sh@10 -- # set +x 00:22:59.330 [2024-07-22 16:00:02.059610] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:59.330 [2024-07-22 16:00:02.059727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58766 ] 00:22:59.330 { 00:22:59.330 "subsystems": [ 00:22:59.330 { 00:22:59.330 "subsystem": "bdev", 00:22:59.330 "config": [ 00:22:59.330 { 00:22:59.330 "params": { 00:22:59.330 "trtype": "pcie", 00:22:59.330 "traddr": "0000:00:06.0", 00:22:59.330 "name": "Nvme0" 00:22:59.330 }, 00:22:59.330 "method": "bdev_nvme_attach_controller" 00:22:59.330 }, 00:22:59.330 { 00:22:59.330 "params": { 00:22:59.330 "trtype": "pcie", 00:22:59.330 "traddr": "0000:00:07.0", 00:22:59.330 "name": "Nvme1" 00:22:59.330 }, 00:22:59.330 "method": "bdev_nvme_attach_controller" 00:22:59.330 }, 00:22:59.330 { 00:22:59.330 "method": "bdev_wait_for_examine" 00:22:59.330 } 00:22:59.330 ] 00:22:59.330 } 00:22:59.330 ] 00:22:59.330 } 00:22:59.594 [2024-07-22 16:00:02.204369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.594 [2024-07-22 16:00:02.275098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.853  Copying: 1024/1024 [kB] (average 1000 MBps) 00:22:59.853 00:22:59.853 16:00:02 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:22:59.853 16:00:02 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:22:59.853 16:00:02 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:22:59.853 16:00:02 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:22:59.853 16:00:02 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:22:59.853 16:00:02 -- dd/common.sh@31 -- # xtrace_disable 00:22:59.853 16:00:02 -- 
common/autotest_common.sh@10 -- # set +x 00:22:59.853 [2024-07-22 16:00:02.709432] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:59.853 [2024-07-22 16:00:02.709537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58786 ] 00:23:00.111 { 00:23:00.111 "subsystems": [ 00:23:00.111 { 00:23:00.111 "subsystem": "bdev", 00:23:00.111 "config": [ 00:23:00.111 { 00:23:00.111 "params": { 00:23:00.111 "trtype": "pcie", 00:23:00.111 "traddr": "0000:00:06.0", 00:23:00.111 "name": "Nvme0" 00:23:00.111 }, 00:23:00.111 "method": "bdev_nvme_attach_controller" 00:23:00.111 }, 00:23:00.111 { 00:23:00.111 "params": { 00:23:00.111 "trtype": "pcie", 00:23:00.111 "traddr": "0000:00:07.0", 00:23:00.111 "name": "Nvme1" 00:23:00.111 }, 00:23:00.111 "method": "bdev_nvme_attach_controller" 00:23:00.111 }, 00:23:00.111 { 00:23:00.111 "method": "bdev_wait_for_examine" 00:23:00.111 } 00:23:00.111 ] 00:23:00.111 } 00:23:00.111 ] 00:23:00.111 } 00:23:00.111 [2024-07-22 16:00:02.842226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.111 [2024-07-22 16:00:02.913847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.625  Copying: 65/65 [MB] (average 812 MBps) 00:23:00.625 00:23:00.625 16:00:03 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:23:00.625 16:00:03 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:23:00.625 16:00:03 -- dd/common.sh@31 -- # xtrace_disable 00:23:00.625 16:00:03 -- common/autotest_common.sh@10 -- # set +x 00:23:00.625 [2024-07-22 16:00:03.450451] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
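The offset-magic pattern has just completed for block offset 16 and is repeating at offset 64: 65 one-MiB blocks are copied from the start of Nvme0n1 into Nvme1n1 at the given --seek offset, a single block is read back at the same --skip offset, and its first 26 bytes must still be the marker 'This Is Our Magic, find it' planted at the start of dd.dump0 earlier. A rough standalone equivalent, assuming a file nvme.json that holds the two-controller config printed above:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
$DD --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json nvme.json   # write across the offset
$DD --ib=Nvme1n1 --of=dd.dump1 --count=1 --skip=16 --bs=1048576 --json nvme.json   # read one block back at the offset
head -c 26 dd.dump1   # should print: This Is Our Magic, find it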
00:23:00.625 [2024-07-22 16:00:03.450619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58801 ] 00:23:00.625 { 00:23:00.625 "subsystems": [ 00:23:00.625 { 00:23:00.625 "subsystem": "bdev", 00:23:00.625 "config": [ 00:23:00.625 { 00:23:00.625 "params": { 00:23:00.625 "trtype": "pcie", 00:23:00.625 "traddr": "0000:00:06.0", 00:23:00.625 "name": "Nvme0" 00:23:00.625 }, 00:23:00.625 "method": "bdev_nvme_attach_controller" 00:23:00.625 }, 00:23:00.625 { 00:23:00.625 "params": { 00:23:00.625 "trtype": "pcie", 00:23:00.625 "traddr": "0000:00:07.0", 00:23:00.625 "name": "Nvme1" 00:23:00.625 }, 00:23:00.625 "method": "bdev_nvme_attach_controller" 00:23:00.625 }, 00:23:00.625 { 00:23:00.625 "method": "bdev_wait_for_examine" 00:23:00.625 } 00:23:00.625 ] 00:23:00.625 } 00:23:00.625 ] 00:23:00.625 } 00:23:00.883 [2024-07-22 16:00:03.587315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.883 [2024-07-22 16:00:03.673592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.398  Copying: 1024/1024 [kB] (average 1000 MBps) 00:23:01.398 00:23:01.398 16:00:04 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:23:01.398 16:00:04 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:23:01.398 00:23:01.398 real 0m2.837s 00:23:01.398 user 0m2.147s 00:23:01.398 sys 0m0.480s 00:23:01.398 16:00:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:01.398 16:00:04 -- common/autotest_common.sh@10 -- # set +x 00:23:01.398 ************************************ 00:23:01.398 END TEST dd_offset_magic 00:23:01.398 ************************************ 00:23:01.398 16:00:04 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:23:01.398 16:00:04 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:23:01.398 16:00:04 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:23:01.398 16:00:04 -- dd/common.sh@11 -- # local nvme_ref= 00:23:01.398 16:00:04 -- dd/common.sh@12 -- # local size=4194330 00:23:01.398 16:00:04 -- dd/common.sh@14 -- # local bs=1048576 00:23:01.398 16:00:04 -- dd/common.sh@15 -- # local count=5 00:23:01.398 16:00:04 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:23:01.398 16:00:04 -- dd/common.sh@18 -- # gen_conf 00:23:01.398 16:00:04 -- dd/common.sh@31 -- # xtrace_disable 00:23:01.398 16:00:04 -- common/autotest_common.sh@10 -- # set +x 00:23:01.398 [2024-07-22 16:00:04.169444] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
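With dd_offset_magic finished, cleanup zeroes what the tests wrote: clear_nvme asks for 4194330 bytes (4 MiB plus the 26-byte marker), which rounds up to five 1 MiB blocks of /dev/zero copied into each NVMe bdev. Roughly, per bdev:

# cleanup pass, repeated for Nvme0n1 and Nvme1n1 (nvme.json as in the earlier sketch)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --count=5 --ob=Nvme0n1 --json nvme.json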
00:23:01.398 [2024-07-22 16:00:04.169585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58830 ] 00:23:01.398 { 00:23:01.398 "subsystems": [ 00:23:01.398 { 00:23:01.398 "subsystem": "bdev", 00:23:01.398 "config": [ 00:23:01.398 { 00:23:01.398 "params": { 00:23:01.398 "trtype": "pcie", 00:23:01.398 "traddr": "0000:00:06.0", 00:23:01.398 "name": "Nvme0" 00:23:01.398 }, 00:23:01.398 "method": "bdev_nvme_attach_controller" 00:23:01.398 }, 00:23:01.398 { 00:23:01.398 "params": { 00:23:01.398 "trtype": "pcie", 00:23:01.398 "traddr": "0000:00:07.0", 00:23:01.398 "name": "Nvme1" 00:23:01.398 }, 00:23:01.398 "method": "bdev_nvme_attach_controller" 00:23:01.398 }, 00:23:01.398 { 00:23:01.398 "method": "bdev_wait_for_examine" 00:23:01.398 } 00:23:01.398 ] 00:23:01.398 } 00:23:01.399 ] 00:23:01.399 } 00:23:01.656 [2024-07-22 16:00:04.309935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.656 [2024-07-22 16:00:04.370093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.915  Copying: 5120/5120 [kB] (average 1666 MBps) 00:23:01.915 00:23:01.915 16:00:04 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:23:01.915 16:00:04 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:23:01.915 16:00:04 -- dd/common.sh@11 -- # local nvme_ref= 00:23:01.915 16:00:04 -- dd/common.sh@12 -- # local size=4194330 00:23:01.915 16:00:04 -- dd/common.sh@14 -- # local bs=1048576 00:23:01.915 16:00:04 -- dd/common.sh@15 -- # local count=5 00:23:01.915 16:00:04 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:23:01.915 16:00:04 -- dd/common.sh@18 -- # gen_conf 00:23:01.915 16:00:04 -- dd/common.sh@31 -- # xtrace_disable 00:23:01.915 16:00:04 -- common/autotest_common.sh@10 -- # set +x 00:23:02.173 [2024-07-22 16:00:04.828260] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:23:02.173 [2024-07-22 16:00:04.828377] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58850 ] 00:23:02.173 { 00:23:02.173 "subsystems": [ 00:23:02.173 { 00:23:02.173 "subsystem": "bdev", 00:23:02.173 "config": [ 00:23:02.173 { 00:23:02.173 "params": { 00:23:02.173 "trtype": "pcie", 00:23:02.173 "traddr": "0000:00:06.0", 00:23:02.173 "name": "Nvme0" 00:23:02.173 }, 00:23:02.173 "method": "bdev_nvme_attach_controller" 00:23:02.173 }, 00:23:02.173 { 00:23:02.173 "params": { 00:23:02.173 "trtype": "pcie", 00:23:02.173 "traddr": "0000:00:07.0", 00:23:02.173 "name": "Nvme1" 00:23:02.173 }, 00:23:02.173 "method": "bdev_nvme_attach_controller" 00:23:02.173 }, 00:23:02.173 { 00:23:02.173 "method": "bdev_wait_for_examine" 00:23:02.173 } 00:23:02.173 ] 00:23:02.173 } 00:23:02.173 ] 00:23:02.173 } 00:23:02.173 [2024-07-22 16:00:04.964117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.430 [2024-07-22 16:00:05.038084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.716  Copying: 5120/5120 [kB] (average 1000 MBps) 00:23:02.716 00:23:02.716 16:00:05 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:23:02.716 00:23:02.716 real 0m6.706s 00:23:02.716 user 0m4.946s 00:23:02.716 sys 0m1.212s 00:23:02.716 16:00:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:02.716 16:00:05 -- common/autotest_common.sh@10 -- # set +x 00:23:02.716 ************************************ 00:23:02.716 END TEST spdk_dd_bdev_to_bdev 00:23:02.716 ************************************ 00:23:02.716 16:00:05 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:23:02.716 16:00:05 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:23:02.716 16:00:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:02.716 16:00:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:02.716 16:00:05 -- common/autotest_common.sh@10 -- # set +x 00:23:02.716 ************************************ 00:23:02.716 START TEST spdk_dd_uring 00:23:02.716 ************************************ 00:23:02.716 16:00:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:23:02.716 * Looking for test storage... 
00:23:02.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:23:02.716 16:00:05 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:02.716 16:00:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.716 16:00:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.716 16:00:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.716 16:00:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.716 16:00:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.716 16:00:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.716 16:00:05 -- paths/export.sh@5 -- # export PATH 00:23:02.716 16:00:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.716 16:00:05 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:23:02.716 16:00:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:02.716 16:00:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:02.716 16:00:05 -- common/autotest_common.sh@10 -- # set +x 00:23:02.974 ************************************ 00:23:02.974 START TEST dd_uring_copy 00:23:02.974 ************************************ 00:23:02.974 16:00:05 -- common/autotest_common.sh@1104 -- # uring_zram_copy 00:23:02.974 16:00:05 -- dd/uring.sh@15 -- # local zram_dev_id 00:23:02.974 16:00:05 -- dd/uring.sh@16 -- # local magic 00:23:02.974 16:00:05 -- dd/uring.sh@17 -- # local 
magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:23:02.974 16:00:05 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:23:02.974 16:00:05 -- dd/uring.sh@19 -- # local verify_magic 00:23:02.974 16:00:05 -- dd/uring.sh@21 -- # init_zram 00:23:02.974 16:00:05 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:23:02.974 16:00:05 -- dd/common.sh@164 -- # return 00:23:02.974 16:00:05 -- dd/uring.sh@22 -- # create_zram_dev 00:23:02.974 16:00:05 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:23:02.974 16:00:05 -- dd/uring.sh@22 -- # zram_dev_id=1 00:23:02.974 16:00:05 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:23:02.974 16:00:05 -- dd/common.sh@181 -- # local id=1 00:23:02.974 16:00:05 -- dd/common.sh@182 -- # local size=512M 00:23:02.974 16:00:05 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:23:02.974 16:00:05 -- dd/common.sh@186 -- # echo 512M 00:23:02.974 16:00:05 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:23:02.974 16:00:05 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:23:02.974 16:00:05 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:23:02.974 16:00:05 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:23:02.974 16:00:05 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:23:02.974 16:00:05 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:23:02.974 16:00:05 -- dd/uring.sh@41 -- # gen_bytes 1024 00:23:02.974 16:00:05 -- dd/common.sh@98 -- # xtrace_disable 00:23:02.974 16:00:05 -- common/autotest_common.sh@10 -- # set +x 00:23:02.975 16:00:05 -- dd/uring.sh@41 -- # magic=o06unzx5rimuzv2jqkbq7je89ao49mfrgyiajq372cxjyuglksg15ms04r25qt4uz3elp46duzq3tzmkm0dtvuomis3elz7pg4rk51zariqioyyqmwikxl2dur4rfz7cao6ukycpyvzq5on2bcqszotgshtpz6ggzxvw1j1kikek6xdttp4cgfr9uo4b3k45zxtprn8yj67fwnkfwzk6x7lwquzpclc2hte4q33wtflp0gqhs9veehtul9l1o0ldot62ietq9ic5woucl2urxspcrh9lnracjcgaxwnks1fcdgfn20njlvlyj48ic5jdhno3c7cm77q7y3qfkkskmgbwcvah0352gehgfr22n0qm9xgekck03ysx1ur3n6dawn37y0arbdz7r13jqi30y0b99zavcr07ek87dplctuhd51l15ahhh6whua9gh1acagwlwv4q7db9add5lmy6ynbigq0z4432ugnw7elcfkkoqojfamrzr1chcpx2n1aw37gdn3nre627qd2kpn4o361lmgvgu6y0lqh7sbk5ta9efhs61fh6jls8vszvxw6rlgu9bysi5mhclpd9w9mrxpba0ttngn1e80d0ptbmcmtvc6zc3uv5brb4bcdsthug9lsz9sucvv8n7p7bm0wefhx86w9o03haemfn49jzggfekvbjhuspx1gsca399t12nr1vljjpwudggpj1ny9et9d88navgf62ymfq54izgoxww34xjbbrjrajueqc6o26kdvr98v4bcuvei9rjfhbdfcsbda2col9t02baoh18snsablzoiam6rfsxa0ldrnlrnbdo1mac1j8vyzv7gk3yn4q3qovxj7ujhsbbxemkbep8m56fi4yivymkhre71koa3qwslk74dryt6qjnfx9kkpc6aj71gbovoc4mfgzhkfolyhmbbguhe9eiwf8ekmzq209bkmuyu1euwpd7gl2rrvpscvxgaedg765g0iiezixy236toker59wpyuk0cpa 00:23:02.975 16:00:05 -- dd/uring.sh@42 -- # echo 
o06unzx5rimuzv2jqkbq7je89ao49mfrgyiajq372cxjyuglksg15ms04r25qt4uz3elp46duzq3tzmkm0dtvuomis3elz7pg4rk51zariqioyyqmwikxl2dur4rfz7cao6ukycpyvzq5on2bcqszotgshtpz6ggzxvw1j1kikek6xdttp4cgfr9uo4b3k45zxtprn8yj67fwnkfwzk6x7lwquzpclc2hte4q33wtflp0gqhs9veehtul9l1o0ldot62ietq9ic5woucl2urxspcrh9lnracjcgaxwnks1fcdgfn20njlvlyj48ic5jdhno3c7cm77q7y3qfkkskmgbwcvah0352gehgfr22n0qm9xgekck03ysx1ur3n6dawn37y0arbdz7r13jqi30y0b99zavcr07ek87dplctuhd51l15ahhh6whua9gh1acagwlwv4q7db9add5lmy6ynbigq0z4432ugnw7elcfkkoqojfamrzr1chcpx2n1aw37gdn3nre627qd2kpn4o361lmgvgu6y0lqh7sbk5ta9efhs61fh6jls8vszvxw6rlgu9bysi5mhclpd9w9mrxpba0ttngn1e80d0ptbmcmtvc6zc3uv5brb4bcdsthug9lsz9sucvv8n7p7bm0wefhx86w9o03haemfn49jzggfekvbjhuspx1gsca399t12nr1vljjpwudggpj1ny9et9d88navgf62ymfq54izgoxww34xjbbrjrajueqc6o26kdvr98v4bcuvei9rjfhbdfcsbda2col9t02baoh18snsablzoiam6rfsxa0ldrnlrnbdo1mac1j8vyzv7gk3yn4q3qovxj7ujhsbbxemkbep8m56fi4yivymkhre71koa3qwslk74dryt6qjnfx9kkpc6aj71gbovoc4mfgzhkfolyhmbbguhe9eiwf8ekmzq209bkmuyu1euwpd7gl2rrvpscvxgaedg765g0iiezixy236toker59wpyuk0cpa 00:23:02.975 16:00:05 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:23:02.975 [2024-07-22 16:00:05.655202] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:02.975 [2024-07-22 16:00:05.655301] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58913 ] 00:23:02.975 [2024-07-22 16:00:05.786078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.232 [2024-07-22 16:00:05.871060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.810  Copying: 511/511 [MB] (average 1372 MBps) 00:23:03.810 00:23:03.810 16:00:06 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:23:04.068 16:00:06 -- dd/uring.sh@54 -- # gen_conf 00:23:04.068 16:00:06 -- dd/common.sh@31 -- # xtrace_disable 00:23:04.068 16:00:06 -- common/autotest_common.sh@10 -- # set +x 00:23:04.068 [2024-07-22 16:00:06.733634] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
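The uring copy test stages its data on a compressed-RAM block device: a 512M zram disk is hot-added, exposed to spdk_dd as a uring bdev named uring0 alongside a 512 MiB malloc bdev, and magic.dump0 (1024 bytes of random text padded with zeros to roughly the device size) is now being written into it. A hedged sketch of the setup, assuming the hot-add returns 1 as it did in this job and using /tmp/uring_copy.json as a stand-in for the fd-62 config:

cat /sys/class/zram-control/hot_add       # allocates a zram device; returned 1 here
echo 512M > /sys/block/zram1/disksize     # standard zram sysfs attribute for sizing the device
cat > /tmp/uring_copy.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_malloc_create", "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
  { "method": "bdev_uring_create",  "params": { "filename": "/dev/zram1", "name": "uring0" } },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /tmp/uring_copy.json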
00:23:04.068 [2024-07-22 16:00:06.733754] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58927 ] 00:23:04.068 { 00:23:04.068 "subsystems": [ 00:23:04.068 { 00:23:04.068 "subsystem": "bdev", 00:23:04.068 "config": [ 00:23:04.068 { 00:23:04.068 "params": { 00:23:04.068 "block_size": 512, 00:23:04.068 "num_blocks": 1048576, 00:23:04.068 "name": "malloc0" 00:23:04.068 }, 00:23:04.068 "method": "bdev_malloc_create" 00:23:04.068 }, 00:23:04.068 { 00:23:04.068 "params": { 00:23:04.068 "filename": "/dev/zram1", 00:23:04.068 "name": "uring0" 00:23:04.068 }, 00:23:04.068 "method": "bdev_uring_create" 00:23:04.068 }, 00:23:04.068 { 00:23:04.068 "method": "bdev_wait_for_examine" 00:23:04.068 } 00:23:04.068 ] 00:23:04.068 } 00:23:04.068 ] 00:23:04.068 } 00:23:04.068 [2024-07-22 16:00:06.879062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.325 [2024-07-22 16:00:06.946934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.471  Copying: 200/512 [MB] (200 MBps) Copying: 390/512 [MB] (189 MBps) Copying: 512/512 [MB] (average 195 MBps) 00:23:07.471 00:23:07.471 16:00:10 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:23:07.471 16:00:10 -- dd/uring.sh@60 -- # gen_conf 00:23:07.471 16:00:10 -- dd/common.sh@31 -- # xtrace_disable 00:23:07.471 16:00:10 -- common/autotest_common.sh@10 -- # set +x 00:23:07.471 [2024-07-22 16:00:10.076328] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:07.471 [2024-07-22 16:00:10.076412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58975 ] 00:23:07.471 { 00:23:07.471 "subsystems": [ 00:23:07.471 { 00:23:07.471 "subsystem": "bdev", 00:23:07.471 "config": [ 00:23:07.471 { 00:23:07.471 "params": { 00:23:07.471 "block_size": 512, 00:23:07.471 "num_blocks": 1048576, 00:23:07.471 "name": "malloc0" 00:23:07.471 }, 00:23:07.471 "method": "bdev_malloc_create" 00:23:07.471 }, 00:23:07.471 { 00:23:07.471 "params": { 00:23:07.471 "filename": "/dev/zram1", 00:23:07.471 "name": "uring0" 00:23:07.471 }, 00:23:07.471 "method": "bdev_uring_create" 00:23:07.471 }, 00:23:07.471 { 00:23:07.471 "method": "bdev_wait_for_examine" 00:23:07.471 } 00:23:07.471 ] 00:23:07.471 } 00:23:07.471 ] 00:23:07.471 } 00:23:07.471 [2024-07-22 16:00:10.213582] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.471 [2024-07-22 16:00:10.271798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.968  Copying: 133/512 [MB] (133 MBps) Copying: 261/512 [MB] (127 MBps) Copying: 386/512 [MB] (125 MBps) Copying: 504/512 [MB] (117 MBps) Copying: 512/512 [MB] (average 126 MBps) 00:23:11.968 00:23:11.968 16:00:14 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:23:11.968 16:00:14 -- dd/uring.sh@66 -- # [[ 
o06unzx5rimuzv2jqkbq7je89ao49mfrgyiajq372cxjyuglksg15ms04r25qt4uz3elp46duzq3tzmkm0dtvuomis3elz7pg4rk51zariqioyyqmwikxl2dur4rfz7cao6ukycpyvzq5on2bcqszotgshtpz6ggzxvw1j1kikek6xdttp4cgfr9uo4b3k45zxtprn8yj67fwnkfwzk6x7lwquzpclc2hte4q33wtflp0gqhs9veehtul9l1o0ldot62ietq9ic5woucl2urxspcrh9lnracjcgaxwnks1fcdgfn20njlvlyj48ic5jdhno3c7cm77q7y3qfkkskmgbwcvah0352gehgfr22n0qm9xgekck03ysx1ur3n6dawn37y0arbdz7r13jqi30y0b99zavcr07ek87dplctuhd51l15ahhh6whua9gh1acagwlwv4q7db9add5lmy6ynbigq0z4432ugnw7elcfkkoqojfamrzr1chcpx2n1aw37gdn3nre627qd2kpn4o361lmgvgu6y0lqh7sbk5ta9efhs61fh6jls8vszvxw6rlgu9bysi5mhclpd9w9mrxpba0ttngn1e80d0ptbmcmtvc6zc3uv5brb4bcdsthug9lsz9sucvv8n7p7bm0wefhx86w9o03haemfn49jzggfekvbjhuspx1gsca399t12nr1vljjpwudggpj1ny9et9d88navgf62ymfq54izgoxww34xjbbrjrajueqc6o26kdvr98v4bcuvei9rjfhbdfcsbda2col9t02baoh18snsablzoiam6rfsxa0ldrnlrnbdo1mac1j8vyzv7gk3yn4q3qovxj7ujhsbbxemkbep8m56fi4yivymkhre71koa3qwslk74dryt6qjnfx9kkpc6aj71gbovoc4mfgzhkfolyhmbbguhe9eiwf8ekmzq209bkmuyu1euwpd7gl2rrvpscvxgaedg765g0iiezixy236toker59wpyuk0cpa == \o\0\6\u\n\z\x\5\r\i\m\u\z\v\2\j\q\k\b\q\7\j\e\8\9\a\o\4\9\m\f\r\g\y\i\a\j\q\3\7\2\c\x\j\y\u\g\l\k\s\g\1\5\m\s\0\4\r\2\5\q\t\4\u\z\3\e\l\p\4\6\d\u\z\q\3\t\z\m\k\m\0\d\t\v\u\o\m\i\s\3\e\l\z\7\p\g\4\r\k\5\1\z\a\r\i\q\i\o\y\y\q\m\w\i\k\x\l\2\d\u\r\4\r\f\z\7\c\a\o\6\u\k\y\c\p\y\v\z\q\5\o\n\2\b\c\q\s\z\o\t\g\s\h\t\p\z\6\g\g\z\x\v\w\1\j\1\k\i\k\e\k\6\x\d\t\t\p\4\c\g\f\r\9\u\o\4\b\3\k\4\5\z\x\t\p\r\n\8\y\j\6\7\f\w\n\k\f\w\z\k\6\x\7\l\w\q\u\z\p\c\l\c\2\h\t\e\4\q\3\3\w\t\f\l\p\0\g\q\h\s\9\v\e\e\h\t\u\l\9\l\1\o\0\l\d\o\t\6\2\i\e\t\q\9\i\c\5\w\o\u\c\l\2\u\r\x\s\p\c\r\h\9\l\n\r\a\c\j\c\g\a\x\w\n\k\s\1\f\c\d\g\f\n\2\0\n\j\l\v\l\y\j\4\8\i\c\5\j\d\h\n\o\3\c\7\c\m\7\7\q\7\y\3\q\f\k\k\s\k\m\g\b\w\c\v\a\h\0\3\5\2\g\e\h\g\f\r\2\2\n\0\q\m\9\x\g\e\k\c\k\0\3\y\s\x\1\u\r\3\n\6\d\a\w\n\3\7\y\0\a\r\b\d\z\7\r\1\3\j\q\i\3\0\y\0\b\9\9\z\a\v\c\r\0\7\e\k\8\7\d\p\l\c\t\u\h\d\5\1\l\1\5\a\h\h\h\6\w\h\u\a\9\g\h\1\a\c\a\g\w\l\w\v\4\q\7\d\b\9\a\d\d\5\l\m\y\6\y\n\b\i\g\q\0\z\4\4\3\2\u\g\n\w\7\e\l\c\f\k\k\o\q\o\j\f\a\m\r\z\r\1\c\h\c\p\x\2\n\1\a\w\3\7\g\d\n\3\n\r\e\6\2\7\q\d\2\k\p\n\4\o\3\6\1\l\m\g\v\g\u\6\y\0\l\q\h\7\s\b\k\5\t\a\9\e\f\h\s\6\1\f\h\6\j\l\s\8\v\s\z\v\x\w\6\r\l\g\u\9\b\y\s\i\5\m\h\c\l\p\d\9\w\9\m\r\x\p\b\a\0\t\t\n\g\n\1\e\8\0\d\0\p\t\b\m\c\m\t\v\c\6\z\c\3\u\v\5\b\r\b\4\b\c\d\s\t\h\u\g\9\l\s\z\9\s\u\c\v\v\8\n\7\p\7\b\m\0\w\e\f\h\x\8\6\w\9\o\0\3\h\a\e\m\f\n\4\9\j\z\g\g\f\e\k\v\b\j\h\u\s\p\x\1\g\s\c\a\3\9\9\t\1\2\n\r\1\v\l\j\j\p\w\u\d\g\g\p\j\1\n\y\9\e\t\9\d\8\8\n\a\v\g\f\6\2\y\m\f\q\5\4\i\z\g\o\x\w\w\3\4\x\j\b\b\r\j\r\a\j\u\e\q\c\6\o\2\6\k\d\v\r\9\8\v\4\b\c\u\v\e\i\9\r\j\f\h\b\d\f\c\s\b\d\a\2\c\o\l\9\t\0\2\b\a\o\h\1\8\s\n\s\a\b\l\z\o\i\a\m\6\r\f\s\x\a\0\l\d\r\n\l\r\n\b\d\o\1\m\a\c\1\j\8\v\y\z\v\7\g\k\3\y\n\4\q\3\q\o\v\x\j\7\u\j\h\s\b\b\x\e\m\k\b\e\p\8\m\5\6\f\i\4\y\i\v\y\m\k\h\r\e\7\1\k\o\a\3\q\w\s\l\k\7\4\d\r\y\t\6\q\j\n\f\x\9\k\k\p\c\6\a\j\7\1\g\b\o\v\o\c\4\m\f\g\z\h\k\f\o\l\y\h\m\b\b\g\u\h\e\9\e\i\w\f\8\e\k\m\z\q\2\0\9\b\k\m\u\y\u\1\e\u\w\p\d\7\g\l\2\r\r\v\p\s\c\v\x\g\a\e\d\g\7\6\5\g\0\i\i\e\z\i\x\y\2\3\6\t\o\k\e\r\5\9\w\p\y\u\k\0\c\p\a ]] 00:23:11.968 16:00:14 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:23:11.969 16:00:14 -- dd/uring.sh@69 -- # [[ 
o06unzx5rimuzv2jqkbq7je89ao49mfrgyiajq372cxjyuglksg15ms04r25qt4uz3elp46duzq3tzmkm0dtvuomis3elz7pg4rk51zariqioyyqmwikxl2dur4rfz7cao6ukycpyvzq5on2bcqszotgshtpz6ggzxvw1j1kikek6xdttp4cgfr9uo4b3k45zxtprn8yj67fwnkfwzk6x7lwquzpclc2hte4q33wtflp0gqhs9veehtul9l1o0ldot62ietq9ic5woucl2urxspcrh9lnracjcgaxwnks1fcdgfn20njlvlyj48ic5jdhno3c7cm77q7y3qfkkskmgbwcvah0352gehgfr22n0qm9xgekck03ysx1ur3n6dawn37y0arbdz7r13jqi30y0b99zavcr07ek87dplctuhd51l15ahhh6whua9gh1acagwlwv4q7db9add5lmy6ynbigq0z4432ugnw7elcfkkoqojfamrzr1chcpx2n1aw37gdn3nre627qd2kpn4o361lmgvgu6y0lqh7sbk5ta9efhs61fh6jls8vszvxw6rlgu9bysi5mhclpd9w9mrxpba0ttngn1e80d0ptbmcmtvc6zc3uv5brb4bcdsthug9lsz9sucvv8n7p7bm0wefhx86w9o03haemfn49jzggfekvbjhuspx1gsca399t12nr1vljjpwudggpj1ny9et9d88navgf62ymfq54izgoxww34xjbbrjrajueqc6o26kdvr98v4bcuvei9rjfhbdfcsbda2col9t02baoh18snsablzoiam6rfsxa0ldrnlrnbdo1mac1j8vyzv7gk3yn4q3qovxj7ujhsbbxemkbep8m56fi4yivymkhre71koa3qwslk74dryt6qjnfx9kkpc6aj71gbovoc4mfgzhkfolyhmbbguhe9eiwf8ekmzq209bkmuyu1euwpd7gl2rrvpscvxgaedg765g0iiezixy236toker59wpyuk0cpa == \o\0\6\u\n\z\x\5\r\i\m\u\z\v\2\j\q\k\b\q\7\j\e\8\9\a\o\4\9\m\f\r\g\y\i\a\j\q\3\7\2\c\x\j\y\u\g\l\k\s\g\1\5\m\s\0\4\r\2\5\q\t\4\u\z\3\e\l\p\4\6\d\u\z\q\3\t\z\m\k\m\0\d\t\v\u\o\m\i\s\3\e\l\z\7\p\g\4\r\k\5\1\z\a\r\i\q\i\o\y\y\q\m\w\i\k\x\l\2\d\u\r\4\r\f\z\7\c\a\o\6\u\k\y\c\p\y\v\z\q\5\o\n\2\b\c\q\s\z\o\t\g\s\h\t\p\z\6\g\g\z\x\v\w\1\j\1\k\i\k\e\k\6\x\d\t\t\p\4\c\g\f\r\9\u\o\4\b\3\k\4\5\z\x\t\p\r\n\8\y\j\6\7\f\w\n\k\f\w\z\k\6\x\7\l\w\q\u\z\p\c\l\c\2\h\t\e\4\q\3\3\w\t\f\l\p\0\g\q\h\s\9\v\e\e\h\t\u\l\9\l\1\o\0\l\d\o\t\6\2\i\e\t\q\9\i\c\5\w\o\u\c\l\2\u\r\x\s\p\c\r\h\9\l\n\r\a\c\j\c\g\a\x\w\n\k\s\1\f\c\d\g\f\n\2\0\n\j\l\v\l\y\j\4\8\i\c\5\j\d\h\n\o\3\c\7\c\m\7\7\q\7\y\3\q\f\k\k\s\k\m\g\b\w\c\v\a\h\0\3\5\2\g\e\h\g\f\r\2\2\n\0\q\m\9\x\g\e\k\c\k\0\3\y\s\x\1\u\r\3\n\6\d\a\w\n\3\7\y\0\a\r\b\d\z\7\r\1\3\j\q\i\3\0\y\0\b\9\9\z\a\v\c\r\0\7\e\k\8\7\d\p\l\c\t\u\h\d\5\1\l\1\5\a\h\h\h\6\w\h\u\a\9\g\h\1\a\c\a\g\w\l\w\v\4\q\7\d\b\9\a\d\d\5\l\m\y\6\y\n\b\i\g\q\0\z\4\4\3\2\u\g\n\w\7\e\l\c\f\k\k\o\q\o\j\f\a\m\r\z\r\1\c\h\c\p\x\2\n\1\a\w\3\7\g\d\n\3\n\r\e\6\2\7\q\d\2\k\p\n\4\o\3\6\1\l\m\g\v\g\u\6\y\0\l\q\h\7\s\b\k\5\t\a\9\e\f\h\s\6\1\f\h\6\j\l\s\8\v\s\z\v\x\w\6\r\l\g\u\9\b\y\s\i\5\m\h\c\l\p\d\9\w\9\m\r\x\p\b\a\0\t\t\n\g\n\1\e\8\0\d\0\p\t\b\m\c\m\t\v\c\6\z\c\3\u\v\5\b\r\b\4\b\c\d\s\t\h\u\g\9\l\s\z\9\s\u\c\v\v\8\n\7\p\7\b\m\0\w\e\f\h\x\8\6\w\9\o\0\3\h\a\e\m\f\n\4\9\j\z\g\g\f\e\k\v\b\j\h\u\s\p\x\1\g\s\c\a\3\9\9\t\1\2\n\r\1\v\l\j\j\p\w\u\d\g\g\p\j\1\n\y\9\e\t\9\d\8\8\n\a\v\g\f\6\2\y\m\f\q\5\4\i\z\g\o\x\w\w\3\4\x\j\b\b\r\j\r\a\j\u\e\q\c\6\o\2\6\k\d\v\r\9\8\v\4\b\c\u\v\e\i\9\r\j\f\h\b\d\f\c\s\b\d\a\2\c\o\l\9\t\0\2\b\a\o\h\1\8\s\n\s\a\b\l\z\o\i\a\m\6\r\f\s\x\a\0\l\d\r\n\l\r\n\b\d\o\1\m\a\c\1\j\8\v\y\z\v\7\g\k\3\y\n\4\q\3\q\o\v\x\j\7\u\j\h\s\b\b\x\e\m\k\b\e\p\8\m\5\6\f\i\4\y\i\v\y\m\k\h\r\e\7\1\k\o\a\3\q\w\s\l\k\7\4\d\r\y\t\6\q\j\n\f\x\9\k\k\p\c\6\a\j\7\1\g\b\o\v\o\c\4\m\f\g\z\h\k\f\o\l\y\h\m\b\b\g\u\h\e\9\e\i\w\f\8\e\k\m\z\q\2\0\9\b\k\m\u\y\u\1\e\u\w\p\d\7\g\l\2\r\r\v\p\s\c\v\x\g\a\e\d\g\7\6\5\g\0\i\i\e\z\i\x\y\2\3\6\t\o\k\e\r\5\9\w\p\y\u\k\0\c\p\a ]] 00:23:11.969 16:00:14 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:23:12.534 16:00:15 -- dd/uring.sh@75 -- # gen_conf 00:23:12.534 16:00:15 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:23:12.534 16:00:15 -- dd/common.sh@31 -- # xtrace_disable 00:23:12.534 16:00:15 -- common/autotest_common.sh@10 -- # set +x 
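After both write paths, the 1024-byte verify_magic reads above and the diff -q at dd/uring.sh@71 confirm that magic.dump1, read back out of the zram-backed bdev, is byte-identical to the original magic.dump0; the run starting below then pulls uring0 into malloc0 to exercise the read path at larger block counts. The manual round-trip check would look roughly like:

# verification sketch (/tmp/uring_copy.json as in the earlier setup sketch)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /tmp/uring_copy.json
diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 \
    && echo "zram round trip intact"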
00:23:12.534 [2024-07-22 16:00:15.250207] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:12.534 [2024-07-22 16:00:15.250296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59051 ] 00:23:12.534 { 00:23:12.534 "subsystems": [ 00:23:12.534 { 00:23:12.534 "subsystem": "bdev", 00:23:12.534 "config": [ 00:23:12.534 { 00:23:12.534 "params": { 00:23:12.534 "block_size": 512, 00:23:12.534 "num_blocks": 1048576, 00:23:12.534 "name": "malloc0" 00:23:12.534 }, 00:23:12.534 "method": "bdev_malloc_create" 00:23:12.534 }, 00:23:12.534 { 00:23:12.534 "params": { 00:23:12.534 "filename": "/dev/zram1", 00:23:12.534 "name": "uring0" 00:23:12.534 }, 00:23:12.534 "method": "bdev_uring_create" 00:23:12.534 }, 00:23:12.534 { 00:23:12.534 "method": "bdev_wait_for_examine" 00:23:12.534 } 00:23:12.534 ] 00:23:12.534 } 00:23:12.534 ] 00:23:12.534 } 00:23:12.534 [2024-07-22 16:00:15.384227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.791 [2024-07-22 16:00:15.465265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.616  Copying: 145/512 [MB] (145 MBps) Copying: 289/512 [MB] (144 MBps) Copying: 434/512 [MB] (144 MBps) Copying: 512/512 [MB] (average 144 MBps) 00:23:16.616 00:23:16.616 16:00:19 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:23:16.616 16:00:19 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:23:16.616 16:00:19 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:23:16.616 16:00:19 -- dd/uring.sh@87 -- # gen_conf 00:23:16.616 16:00:19 -- dd/common.sh@31 -- # xtrace_disable 00:23:16.616 16:00:19 -- common/autotest_common.sh@10 -- # set +x 00:23:16.616 16:00:19 -- dd/uring.sh@87 -- # : 00:23:16.616 16:00:19 -- dd/uring.sh@87 -- # : 00:23:16.889 [2024-07-22 16:00:19.494717] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:23:16.889 [2024-07-22 16:00:19.494804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59107 ] 00:23:16.889 { 00:23:16.889 "subsystems": [ 00:23:16.889 { 00:23:16.889 "subsystem": "bdev", 00:23:16.889 "config": [ 00:23:16.889 { 00:23:16.889 "params": { 00:23:16.889 "block_size": 512, 00:23:16.889 "num_blocks": 1048576, 00:23:16.889 "name": "malloc0" 00:23:16.889 }, 00:23:16.889 "method": "bdev_malloc_create" 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "params": { 00:23:16.889 "filename": "/dev/zram1", 00:23:16.889 "name": "uring0" 00:23:16.889 }, 00:23:16.889 "method": "bdev_uring_create" 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "params": { 00:23:16.889 "name": "uring0" 00:23:16.889 }, 00:23:16.889 "method": "bdev_uring_delete" 00:23:16.889 }, 00:23:16.889 { 00:23:16.889 "method": "bdev_wait_for_examine" 00:23:16.889 } 00:23:16.889 ] 00:23:16.889 } 00:23:16.889 ] 00:23:16.889 } 00:23:16.889 [2024-07-22 16:00:19.624656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.889 [2024-07-22 16:00:19.707956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.714  Copying: 0/0 [B] (average 0 Bps) 00:23:17.714 00:23:17.714 16:00:20 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:23:17.714 16:00:20 -- common/autotest_common.sh@640 -- # local es=0 00:23:17.714 16:00:20 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:23:17.714 16:00:20 -- dd/uring.sh@94 -- # gen_conf 00:23:17.714 16:00:20 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:17.714 16:00:20 -- dd/common.sh@31 -- # xtrace_disable 00:23:17.714 16:00:20 -- common/autotest_common.sh@10 -- # set +x 00:23:17.714 16:00:20 -- dd/uring.sh@94 -- # : 00:23:17.714 16:00:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:17.714 16:00:20 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:17.714 16:00:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:17.714 16:00:20 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:17.714 16:00:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:17.714 16:00:20 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:17.714 16:00:20 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:17.714 16:00:20 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:23:17.714 { 00:23:17.714 "subsystems": [ 00:23:17.714 { 00:23:17.714 "subsystem": "bdev", 00:23:17.714 "config": [ 00:23:17.714 { 00:23:17.714 "params": { 00:23:17.714 "block_size": 512, 00:23:17.714 "num_blocks": 1048576, 00:23:17.714 "name": "malloc0" 00:23:17.714 }, 00:23:17.714 "method": "bdev_malloc_create" 00:23:17.714 }, 00:23:17.714 { 00:23:17.714 "params": { 00:23:17.714 "filename": "/dev/zram1", 00:23:17.714 "name": "uring0" 00:23:17.714 }, 00:23:17.714 "method": "bdev_uring_create" 00:23:17.714 }, 00:23:17.714 { 00:23:17.714 "params": { 00:23:17.714 "name": "uring0" 
00:23:17.714 }, 00:23:17.714 "method": "bdev_uring_delete" 00:23:17.714 }, 00:23:17.714 { 00:23:17.714 "method": "bdev_wait_for_examine" 00:23:17.714 } 00:23:17.714 ] 00:23:17.714 } 00:23:17.714 ] 00:23:17.714 } 00:23:17.714 [2024-07-22 16:00:20.340312] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:17.714 [2024-07-22 16:00:20.340454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59139 ] 00:23:17.714 [2024-07-22 16:00:20.480823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.714 [2024-07-22 16:00:20.539814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.971 [2024-07-22 16:00:20.689334] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:23:17.971 [2024-07-22 16:00:20.689410] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:23:17.971 [2024-07-22 16:00:20.689428] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:23:17.971 [2024-07-22 16:00:20.689443] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:18.229 [2024-07-22 16:00:20.860194] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:23:18.229 16:00:20 -- common/autotest_common.sh@643 -- # es=237 00:23:18.229 16:00:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:18.229 16:00:20 -- common/autotest_common.sh@652 -- # es=109 00:23:18.229 16:00:20 -- common/autotest_common.sh@653 -- # case "$es" in 00:23:18.229 16:00:20 -- common/autotest_common.sh@660 -- # es=1 00:23:18.229 16:00:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:18.229 16:00:20 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:23:18.229 16:00:20 -- dd/common.sh@172 -- # local id=1 00:23:18.229 16:00:20 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:23:18.229 16:00:20 -- dd/common.sh@176 -- # echo 1 00:23:18.229 16:00:20 -- dd/common.sh@177 -- # echo 1 00:23:18.229 16:00:20 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:23:18.498 00:23:18.498 real 0m15.746s 00:23:18.498 user 0m8.954s 00:23:18.498 sys 0m6.007s 00:23:18.498 16:00:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:18.498 16:00:21 -- common/autotest_common.sh@10 -- # set +x 00:23:18.498 ************************************ 00:23:18.498 END TEST dd_uring_copy 00:23:18.498 ************************************ 00:23:18.757 00:23:18.757 real 0m15.862s 00:23:18.757 user 0m8.990s 00:23:18.757 sys 0m6.087s 00:23:18.757 16:00:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:18.757 16:00:21 -- common/autotest_common.sh@10 -- # set +x 00:23:18.757 ************************************ 00:23:18.757 END TEST spdk_dd_uring 00:23:18.757 ************************************ 00:23:18.757 16:00:21 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:23:18.757 16:00:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:18.757 16:00:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:18.757 16:00:21 -- common/autotest_common.sh@10 -- # set +x 00:23:18.757 ************************************ 00:23:18.757 START TEST spdk_dd_sparse 00:23:18.757 ************************************ 00:23:18.757 16:00:21 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:23:18.757 * Looking for test storage... 00:23:18.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:23:18.757 16:00:21 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:18.757 16:00:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.757 16:00:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.757 16:00:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.757 16:00:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.757 16:00:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.757 16:00:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.757 16:00:21 -- paths/export.sh@5 -- # export PATH 00:23:18.757 16:00:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.757 16:00:21 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:23:18.757 16:00:21 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:23:18.757 16:00:21 -- dd/sparse.sh@110 -- # file1=file_zero1 00:23:18.757 16:00:21 -- dd/sparse.sh@111 -- # file2=file_zero2 00:23:18.757 16:00:21 -- dd/sparse.sh@112 -- # file3=file_zero3 00:23:18.757 16:00:21 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:23:18.757 16:00:21 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:23:18.757 16:00:21 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:23:18.757 16:00:21 -- dd/sparse.sh@118 -- # prepare 00:23:18.757 16:00:21 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 
104857600 00:23:18.757 16:00:21 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:23:18.757 1+0 records in 00:23:18.757 1+0 records out 00:23:18.757 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00531592 s, 789 MB/s 00:23:18.757 16:00:21 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:23:18.757 1+0 records in 00:23:18.757 1+0 records out 00:23:18.757 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00510437 s, 822 MB/s 00:23:18.757 16:00:21 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:23:18.758 1+0 records in 00:23:18.758 1+0 records out 00:23:18.758 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00485136 s, 865 MB/s 00:23:18.758 16:00:21 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:23:18.758 16:00:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:18.758 16:00:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:18.758 16:00:21 -- common/autotest_common.sh@10 -- # set +x 00:23:18.758 ************************************ 00:23:18.758 START TEST dd_sparse_file_to_file 00:23:18.758 ************************************ 00:23:18.758 16:00:21 -- common/autotest_common.sh@1104 -- # file_to_file 00:23:18.758 16:00:21 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:23:18.758 16:00:21 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:23:18.758 16:00:21 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:23:18.758 16:00:21 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:23:18.758 16:00:21 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:23:18.758 16:00:21 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:23:18.758 16:00:21 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:23:18.758 16:00:21 -- dd/sparse.sh@41 -- # gen_conf 00:23:18.758 16:00:21 -- dd/common.sh@31 -- # xtrace_disable 00:23:18.758 16:00:21 -- common/autotest_common.sh@10 -- # set +x 00:23:18.758 [2024-07-22 16:00:21.574576] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:23:18.758 [2024-07-22 16:00:21.574719] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59229 ] 00:23:18.758 { 00:23:18.758 "subsystems": [ 00:23:18.758 { 00:23:18.758 "subsystem": "bdev", 00:23:18.758 "config": [ 00:23:18.758 { 00:23:18.758 "params": { 00:23:18.758 "block_size": 4096, 00:23:18.758 "filename": "dd_sparse_aio_disk", 00:23:18.758 "name": "dd_aio" 00:23:18.758 }, 00:23:18.758 "method": "bdev_aio_create" 00:23:18.758 }, 00:23:18.758 { 00:23:18.758 "params": { 00:23:18.758 "lvs_name": "dd_lvstore", 00:23:18.758 "bdev_name": "dd_aio" 00:23:18.758 }, 00:23:18.758 "method": "bdev_lvol_create_lvstore" 00:23:18.758 }, 00:23:18.758 { 00:23:18.758 "method": "bdev_wait_for_examine" 00:23:18.758 } 00:23:18.758 ] 00:23:18.758 } 00:23:18.758 ] 00:23:18.758 } 00:23:19.016 [2024-07-22 16:00:21.713345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.016 [2024-07-22 16:00:21.786201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.274  Copying: 12/36 [MB] (average 1333 MBps) 00:23:19.274 00:23:19.274 16:00:22 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:23:19.274 16:00:22 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:23:19.274 16:00:22 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:23:19.274 16:00:22 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:23:19.274 16:00:22 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:23:19.274 16:00:22 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:23:19.274 16:00:22 -- dd/sparse.sh@52 -- # stat1_b=24576 00:23:19.274 16:00:22 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:23:19.274 16:00:22 -- dd/sparse.sh@53 -- # stat2_b=24576 00:23:19.274 16:00:22 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:23:19.274 00:23:19.274 real 0m0.605s 00:23:19.274 user 0m0.386s 00:23:19.274 sys 0m0.144s 00:23:19.274 16:00:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:19.274 16:00:22 -- common/autotest_common.sh@10 -- # set +x 00:23:19.274 ************************************ 00:23:19.274 END TEST dd_sparse_file_to_file 00:23:19.274 ************************************ 00:23:19.543 16:00:22 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:23:19.544 16:00:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:19.544 16:00:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:19.544 16:00:22 -- common/autotest_common.sh@10 -- # set +x 00:23:19.544 ************************************ 00:23:19.544 START TEST dd_sparse_file_to_bdev 00:23:19.544 ************************************ 00:23:19.544 16:00:22 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:23:19.544 16:00:22 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:23:19.544 16:00:22 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:23:19.544 16:00:22 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:23:19.544 16:00:22 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:23:19.544 16:00:22 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:23:19.544 16:00:22 -- dd/sparse.sh@73 -- # gen_conf 
00:23:19.544 16:00:22 -- dd/common.sh@31 -- # xtrace_disable 00:23:19.544 16:00:22 -- common/autotest_common.sh@10 -- # set +x 00:23:19.544 [2024-07-22 16:00:22.216116] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:19.544 [2024-07-22 16:00:22.216248] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59269 ] 00:23:19.544 { 00:23:19.544 "subsystems": [ 00:23:19.544 { 00:23:19.544 "subsystem": "bdev", 00:23:19.544 "config": [ 00:23:19.544 { 00:23:19.544 "params": { 00:23:19.544 "block_size": 4096, 00:23:19.544 "filename": "dd_sparse_aio_disk", 00:23:19.544 "name": "dd_aio" 00:23:19.544 }, 00:23:19.544 "method": "bdev_aio_create" 00:23:19.544 }, 00:23:19.544 { 00:23:19.544 "params": { 00:23:19.544 "lvs_name": "dd_lvstore", 00:23:19.544 "lvol_name": "dd_lvol", 00:23:19.544 "size": 37748736, 00:23:19.544 "thin_provision": true 00:23:19.544 }, 00:23:19.544 "method": "bdev_lvol_create" 00:23:19.544 }, 00:23:19.544 { 00:23:19.544 "method": "bdev_wait_for_examine" 00:23:19.544 } 00:23:19.544 ] 00:23:19.544 } 00:23:19.544 ] 00:23:19.544 } 00:23:19.544 [2024-07-22 16:00:22.355917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.804 [2024-07-22 16:00:22.414342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.804 [2024-07-22 16:00:22.470864] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:23:19.804  Copying: 12/36 [MB] (average 631 MBps)[2024-07-22 16:00:22.506523] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:23:20.062 00:23:20.062 00:23:20.062 00:23:20.062 real 0m0.573s 00:23:20.062 user 0m0.374s 00:23:20.062 sys 0m0.124s 00:23:20.062 16:00:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:20.062 16:00:22 -- common/autotest_common.sh@10 -- # set +x 00:23:20.062 ************************************ 00:23:20.062 END TEST dd_sparse_file_to_bdev 00:23:20.062 ************************************ 00:23:20.062 16:00:22 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:23:20.062 16:00:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:20.062 16:00:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:20.062 16:00:22 -- common/autotest_common.sh@10 -- # set +x 00:23:20.062 ************************************ 00:23:20.062 START TEST dd_sparse_bdev_to_file 00:23:20.062 ************************************ 00:23:20.062 16:00:22 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:23:20.062 16:00:22 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:23:20.062 16:00:22 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:23:20.062 16:00:22 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:23:20.062 16:00:22 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:23:20.062 16:00:22 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:23:20.062 16:00:22 -- dd/sparse.sh@91 -- # gen_conf 00:23:20.062 16:00:22 -- dd/common.sh@31 -- # xtrace_disable 00:23:20.062 16:00:22 -- 
common/autotest_common.sh@10 -- # set +x 00:23:20.062 [2024-07-22 16:00:22.834264] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:20.062 [2024-07-22 16:00:22.834399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59301 ] 00:23:20.062 { 00:23:20.062 "subsystems": [ 00:23:20.062 { 00:23:20.062 "subsystem": "bdev", 00:23:20.062 "config": [ 00:23:20.062 { 00:23:20.062 "params": { 00:23:20.062 "block_size": 4096, 00:23:20.062 "filename": "dd_sparse_aio_disk", 00:23:20.062 "name": "dd_aio" 00:23:20.062 }, 00:23:20.062 "method": "bdev_aio_create" 00:23:20.062 }, 00:23:20.062 { 00:23:20.062 "method": "bdev_wait_for_examine" 00:23:20.062 } 00:23:20.062 ] 00:23:20.062 } 00:23:20.062 ] 00:23:20.062 } 00:23:20.320 [2024-07-22 16:00:22.966928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.320 [2024-07-22 16:00:23.032749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.577  Copying: 12/36 [MB] (average 1200 MBps) 00:23:20.577 00:23:20.577 16:00:23 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:23:20.577 16:00:23 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:23:20.577 16:00:23 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:23:20.577 16:00:23 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:23:20.577 16:00:23 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:23:20.577 16:00:23 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:23:20.577 16:00:23 -- dd/sparse.sh@102 -- # stat2_b=24576 00:23:20.577 16:00:23 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:23:20.577 16:00:23 -- dd/sparse.sh@103 -- # stat3_b=24576 00:23:20.577 16:00:23 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:23:20.577 00:23:20.577 real 0m0.594s 00:23:20.577 user 0m0.373s 00:23:20.577 sys 0m0.138s 00:23:20.577 16:00:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:20.577 16:00:23 -- common/autotest_common.sh@10 -- # set +x 00:23:20.577 ************************************ 00:23:20.577 END TEST dd_sparse_bdev_to_file 00:23:20.577 ************************************ 00:23:20.577 16:00:23 -- dd/sparse.sh@1 -- # cleanup 00:23:20.577 16:00:23 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:23:20.577 16:00:23 -- dd/sparse.sh@12 -- # rm file_zero1 00:23:20.577 16:00:23 -- dd/sparse.sh@13 -- # rm file_zero2 00:23:20.577 16:00:23 -- dd/sparse.sh@14 -- # rm file_zero3 00:23:20.577 00:23:20.577 real 0m2.013s 00:23:20.577 user 0m1.207s 00:23:20.577 sys 0m0.564s 00:23:20.577 16:00:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:20.577 16:00:23 -- common/autotest_common.sh@10 -- # set +x 00:23:20.577 ************************************ 00:23:20.577 END TEST spdk_dd_sparse 00:23:20.577 ************************************ 00:23:20.835 16:00:23 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:23:20.835 16:00:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:20.835 16:00:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:20.835 16:00:23 -- common/autotest_common.sh@10 -- # set +x 00:23:20.835 ************************************ 00:23:20.835 START TEST spdk_dd_negative 00:23:20.835 ************************************ 00:23:20.835 16:00:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 
00:23:20.835 * Looking for test storage... 00:23:20.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:23:20.835 16:00:23 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:20.835 16:00:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:20.835 16:00:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:20.835 16:00:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:20.835 16:00:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.836 16:00:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.836 16:00:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.836 16:00:23 -- paths/export.sh@5 -- # export PATH 00:23:20.836 16:00:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.836 16:00:23 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:20.836 16:00:23 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:20.836 16:00:23 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:20.836 16:00:23 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:20.836 16:00:23 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:23:20.836 16:00:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:20.836 16:00:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:20.836 16:00:23 -- 
common/autotest_common.sh@10 -- # set +x 00:23:20.836 ************************************ 00:23:20.836 START TEST dd_invalid_arguments 00:23:20.836 ************************************ 00:23:20.836 16:00:23 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:23:20.836 16:00:23 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:23:20.836 16:00:23 -- common/autotest_common.sh@640 -- # local es=0 00:23:20.836 16:00:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:23:20.836 16:00:23 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:20.836 16:00:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:20.836 16:00:23 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:20.836 16:00:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:20.836 16:00:23 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:20.836 16:00:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:20.836 16:00:23 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:20.836 16:00:23 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:20.836 16:00:23 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:23:20.836 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:23:20.836 options: 00:23:20.836 -c, --config JSON config file (default none) 00:23:20.836 --json JSON config file (default none) 00:23:20.836 --json-ignore-init-errors 00:23:20.836 don't exit on invalid config entry 00:23:20.836 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:23:20.836 -g, --single-file-segments 00:23:20.836 force creating just one hugetlbfs file 00:23:20.836 -h, --help show this usage 00:23:20.836 -i, --shm-id shared memory ID (optional) 00:23:20.836 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:23:20.836 --lcores lcore to CPU mapping list. The list is in the format: 00:23:20.836 [<,lcores[@CPUs]>...] 00:23:20.836 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:23:20.836 Within the group, '-' is used for range separator, 00:23:20.836 ',' is used for single number separator. 00:23:20.836 '( )' can be omitted for single element group, 00:23:20.836 '@' can be omitted if cpus and lcores have the same value 00:23:20.836 -n, --mem-channels channel number of memory channels used for DPDK 00:23:20.836 -p, --main-core main (primary) core for DPDK 00:23:20.836 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:23:20.836 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:23:20.836 --disable-cpumask-locks Disable CPU core lock files. 
00:23:20.836 --silence-noticelog disable notice level logging to stderr 00:23:20.836 --msg-mempool-size global message memory pool size in count (default: 262143) 00:23:20.836 -u, --no-pci disable PCI access 00:23:20.836 --wait-for-rpc wait for RPCs to initialize subsystems 00:23:20.836 --max-delay maximum reactor delay (in microseconds) 00:23:20.836 -B, --pci-blocked pci addr to block (can be used more than once) 00:23:20.836 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:23:20.836 -R, --huge-unlink unlink huge files after initialization 00:23:20.836 -v, --version print SPDK version 00:23:20.836 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:23:20.836 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:23:20.836 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:23:20.836 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:23:20.836 Tracepoints vary in size and can use more than one trace entry. 00:23:20.836 --rpcs-allowed comma-separated list of permitted RPCS 00:23:20.836 --env-context Opaque context for use of the env implementation 00:23:20.836 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:23:20.836 --no-huge run without using hugepages 00:23:20.836 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, scsi, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, vfu_virtio, vfu_virtio_blk, vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:23:20.836 -e, --tpoint-group [:] 00:23:20.836 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:23:20.836 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:23:20.836 [2024-07-22 16:00:23.603796] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:23:20.836 enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:23:20.836 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:23:20.836 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:23:20.836 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:23:20.836 [--------- DD Options ---------] 00:23:20.836 --if Input file. Must specify either --if or --ib. 00:23:20.836 --ib Input bdev. Must specifier either --if or --ib 00:23:20.836 --of Output file. Must specify either --of or --ob. 00:23:20.836 --ob Output bdev. Must specify either --of or --ob. 00:23:20.836 --iflag Input file flags. 00:23:20.836 --oflag Output file flags. 00:23:20.836 --bs I/O unit size (default: 4096) 00:23:20.836 --qd Queue depth (default: 2) 00:23:20.836 --count I/O unit count. 
The number of I/O units to copy. (default: all) 00:23:20.836 --skip Skip this many I/O units at start of input. (default: 0) 00:23:20.836 --seek Skip this many I/O units at start of output. (default: 0) 00:23:20.836 --aio Force usage of AIO. (by default io_uring is used if available) 00:23:20.836 --sparse Enable hole skipping in input target 00:23:20.836 Available iflag and oflag values: 00:23:20.836 append - append mode 00:23:20.836 direct - use direct I/O for data 00:23:20.836 directory - fail unless a directory 00:23:20.836 dsync - use synchronized I/O for data 00:23:20.836 noatime - do not update access time 00:23:20.836 noctty - do not assign controlling terminal from file 00:23:20.836 nofollow - do not follow symlinks 00:23:20.836 nonblock - use non-blocking I/O 00:23:20.836 sync - use synchronized I/O for data and metadata 00:23:20.836 16:00:23 -- common/autotest_common.sh@643 -- # es=2 00:23:20.836 16:00:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:20.836 16:00:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:20.836 16:00:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:20.836 00:23:20.836 real 0m0.087s 00:23:20.837 user 0m0.050s 00:23:20.837 sys 0m0.035s 00:23:20.837 16:00:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:20.837 ************************************ 00:23:20.837 END TEST dd_invalid_arguments 00:23:20.837 ************************************ 00:23:20.837 16:00:23 -- common/autotest_common.sh@10 -- # set +x 00:23:20.837 16:00:23 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:23:20.837 16:00:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:20.837 16:00:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:20.837 16:00:23 -- common/autotest_common.sh@10 -- # set +x 00:23:20.837 ************************************ 00:23:20.837 START TEST dd_double_input 00:23:20.837 ************************************ 00:23:20.837 16:00:23 -- common/autotest_common.sh@1104 -- # double_input 00:23:20.837 16:00:23 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:23:20.837 16:00:23 -- common/autotest_common.sh@640 -- # local es=0 00:23:20.837 16:00:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:23:20.837 16:00:23 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:20.837 16:00:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:20.837 16:00:23 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:20.837 16:00:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:20.837 16:00:23 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:20.837 16:00:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:20.837 16:00:23 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:20.837 16:00:23 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:20.837 16:00:23 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:23:21.096 [2024-07-22 16:00:23.732721] spdk_dd.c:1467:main: *ERROR*: You may specify either 
--if or --ib, but not both. 00:23:21.096 16:00:23 -- common/autotest_common.sh@643 -- # es=22 00:23:21.096 16:00:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:21.096 16:00:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:21.096 16:00:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:21.096 00:23:21.096 real 0m0.087s 00:23:21.096 user 0m0.058s 00:23:21.096 sys 0m0.028s 00:23:21.096 16:00:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:21.096 ************************************ 00:23:21.096 END TEST dd_double_input 00:23:21.096 ************************************ 00:23:21.096 16:00:23 -- common/autotest_common.sh@10 -- # set +x 00:23:21.096 16:00:23 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:23:21.096 16:00:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:21.096 16:00:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:21.096 16:00:23 -- common/autotest_common.sh@10 -- # set +x 00:23:21.096 ************************************ 00:23:21.096 START TEST dd_double_output 00:23:21.096 ************************************ 00:23:21.096 16:00:23 -- common/autotest_common.sh@1104 -- # double_output 00:23:21.096 16:00:23 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:23:21.096 16:00:23 -- common/autotest_common.sh@640 -- # local es=0 00:23:21.096 16:00:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:23:21.096 16:00:23 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.096 16:00:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:21.096 16:00:23 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.096 16:00:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:21.096 16:00:23 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.096 16:00:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:21.096 16:00:23 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.096 16:00:23 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:21.096 16:00:23 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:23:21.096 [2024-07-22 16:00:23.853679] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:23:21.096 16:00:23 -- common/autotest_common.sh@643 -- # es=22 00:23:21.096 16:00:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:21.096 16:00:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:21.096 16:00:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:21.096 00:23:21.096 real 0m0.080s 00:23:21.096 user 0m0.048s 00:23:21.096 sys 0m0.030s 00:23:21.096 16:00:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:21.096 ************************************ 00:23:21.096 END TEST dd_double_output 00:23:21.096 ************************************ 00:23:21.096 16:00:23 -- common/autotest_common.sh@10 -- # set +x 00:23:21.096 16:00:23 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:23:21.096 16:00:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:21.096 16:00:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:21.096 16:00:23 -- common/autotest_common.sh@10 -- # set +x 00:23:21.096 ************************************ 00:23:21.096 START TEST dd_no_input 00:23:21.096 ************************************ 00:23:21.096 16:00:23 -- common/autotest_common.sh@1104 -- # no_input 00:23:21.096 16:00:23 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:23:21.096 16:00:23 -- common/autotest_common.sh@640 -- # local es=0 00:23:21.096 16:00:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:23:21.096 16:00:23 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.096 16:00:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:21.096 16:00:23 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.096 16:00:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:21.096 16:00:23 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.096 16:00:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:21.096 16:00:23 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.096 16:00:23 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:21.096 16:00:23 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:23:21.354 [2024-07-22 16:00:23.980402] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:23:21.354 16:00:24 -- common/autotest_common.sh@643 -- # es=22 00:23:21.354 16:00:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:21.354 16:00:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:21.354 16:00:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:21.354 00:23:21.354 real 0m0.087s 00:23:21.354 user 0m0.056s 00:23:21.354 sys 0m0.030s 00:23:21.354 16:00:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:21.354 ************************************ 00:23:21.354 16:00:24 -- common/autotest_common.sh@10 -- # set +x 00:23:21.354 END TEST dd_no_input 00:23:21.354 ************************************ 00:23:21.354 16:00:24 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:23:21.354 16:00:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:21.354 16:00:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:21.354 16:00:24 -- common/autotest_common.sh@10 -- # set +x 00:23:21.354 ************************************ 
00:23:21.354 START TEST dd_no_output 00:23:21.354 ************************************ 00:23:21.354 16:00:24 -- common/autotest_common.sh@1104 -- # no_output 00:23:21.354 16:00:24 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:21.354 16:00:24 -- common/autotest_common.sh@640 -- # local es=0 00:23:21.354 16:00:24 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:21.354 16:00:24 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.354 16:00:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:21.354 16:00:24 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.354 16:00:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:21.354 16:00:24 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.354 16:00:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:21.354 16:00:24 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.354 16:00:24 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:21.354 16:00:24 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:21.354 [2024-07-22 16:00:24.105778] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:23:21.354 16:00:24 -- common/autotest_common.sh@643 -- # es=22 00:23:21.354 16:00:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:21.354 16:00:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:21.354 16:00:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:21.354 00:23:21.354 real 0m0.085s 00:23:21.354 user 0m0.053s 00:23:21.354 sys 0m0.031s 00:23:21.354 16:00:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:21.354 ************************************ 00:23:21.354 END TEST dd_no_output 00:23:21.354 ************************************ 00:23:21.354 16:00:24 -- common/autotest_common.sh@10 -- # set +x 00:23:21.354 16:00:24 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:23:21.354 16:00:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:21.354 16:00:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:21.354 16:00:24 -- common/autotest_common.sh@10 -- # set +x 00:23:21.354 ************************************ 00:23:21.354 START TEST dd_wrong_blocksize 00:23:21.354 ************************************ 00:23:21.354 16:00:24 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:23:21.354 16:00:24 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:23:21.354 16:00:24 -- common/autotest_common.sh@640 -- # local es=0 00:23:21.354 16:00:24 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:23:21.354 16:00:24 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.354 16:00:24 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:23:21.354 16:00:24 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.354 16:00:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:21.354 16:00:24 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.354 16:00:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:21.354 16:00:24 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.354 16:00:24 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:21.354 16:00:24 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:23:21.614 [2024-07-22 16:00:24.226274] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:23:21.614 16:00:24 -- common/autotest_common.sh@643 -- # es=22 00:23:21.614 16:00:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:21.614 16:00:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:21.614 16:00:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:21.614 00:23:21.614 real 0m0.071s 00:23:21.614 user 0m0.047s 00:23:21.614 sys 0m0.023s 00:23:21.614 16:00:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:21.614 ************************************ 00:23:21.614 END TEST dd_wrong_blocksize 00:23:21.614 ************************************ 00:23:21.614 16:00:24 -- common/autotest_common.sh@10 -- # set +x 00:23:21.614 16:00:24 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:23:21.614 16:00:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:21.614 16:00:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:21.614 16:00:24 -- common/autotest_common.sh@10 -- # set +x 00:23:21.614 ************************************ 00:23:21.614 START TEST dd_smaller_blocksize 00:23:21.614 ************************************ 00:23:21.614 16:00:24 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:23:21.614 16:00:24 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:23:21.614 16:00:24 -- common/autotest_common.sh@640 -- # local es=0 00:23:21.614 16:00:24 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:23:21.614 16:00:24 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.614 16:00:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:21.614 16:00:24 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.614 16:00:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:21.614 16:00:24 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.614 16:00:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:21.614 16:00:24 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.614 16:00:24 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:23:21.614 16:00:24 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:23:21.614 [2024-07-22 16:00:24.342324] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:21.614 [2024-07-22 16:00:24.342455] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59516 ] 00:23:21.894 [2024-07-22 16:00:24.482053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.894 [2024-07-22 16:00:24.547515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.153 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:23:22.153 [2024-07-22 16:00:24.873099] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:23:22.153 [2024-07-22 16:00:24.873183] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:22.153 [2024-07-22 16:00:24.944562] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:23:22.411 16:00:25 -- common/autotest_common.sh@643 -- # es=244 00:23:22.411 16:00:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:22.411 16:00:25 -- common/autotest_common.sh@652 -- # es=116 00:23:22.411 16:00:25 -- common/autotest_common.sh@653 -- # case "$es" in 00:23:22.411 16:00:25 -- common/autotest_common.sh@660 -- # es=1 00:23:22.411 16:00:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:22.411 00:23:22.411 real 0m0.786s 00:23:22.411 user 0m0.358s 00:23:22.411 sys 0m0.319s 00:23:22.411 16:00:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:22.411 ************************************ 00:23:22.411 END TEST dd_smaller_blocksize 00:23:22.411 ************************************ 00:23:22.411 16:00:25 -- common/autotest_common.sh@10 -- # set +x 00:23:22.411 16:00:25 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:23:22.411 16:00:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:22.411 16:00:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:22.411 16:00:25 -- common/autotest_common.sh@10 -- # set +x 00:23:22.411 ************************************ 00:23:22.411 START TEST dd_invalid_count 00:23:22.411 ************************************ 00:23:22.411 16:00:25 -- common/autotest_common.sh@1104 -- # invalid_count 00:23:22.411 16:00:25 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:23:22.411 16:00:25 -- common/autotest_common.sh@640 -- # local es=0 00:23:22.411 16:00:25 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:23:22.411 16:00:25 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:22.411 16:00:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:22.411 16:00:25 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:22.411 16:00:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:22.411 16:00:25 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:22.411 16:00:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:22.411 16:00:25 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:22.411 16:00:25 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:22.411 16:00:25 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:23:22.411 [2024-07-22 16:00:25.148981] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:23:22.411 ************************************ 00:23:22.411 END TEST dd_invalid_count 00:23:22.411 ************************************ 00:23:22.411 16:00:25 -- common/autotest_common.sh@643 -- # es=22 00:23:22.411 16:00:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:22.411 16:00:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:22.411 16:00:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:22.411 00:23:22.411 real 0m0.059s 00:23:22.411 user 0m0.039s 00:23:22.411 sys 0m0.019s 00:23:22.411 16:00:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:22.411 16:00:25 -- common/autotest_common.sh@10 -- # set +x 00:23:22.411 16:00:25 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:23:22.411 16:00:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:22.411 16:00:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:22.411 16:00:25 -- common/autotest_common.sh@10 -- # set +x 00:23:22.411 ************************************ 00:23:22.411 START TEST dd_invalid_oflag 00:23:22.411 ************************************ 00:23:22.411 16:00:25 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:23:22.411 16:00:25 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:23:22.412 16:00:25 -- common/autotest_common.sh@640 -- # local es=0 00:23:22.412 16:00:25 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:23:22.412 16:00:25 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:22.412 16:00:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:22.412 16:00:25 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:22.412 16:00:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:22.412 16:00:25 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:22.412 16:00:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:22.412 16:00:25 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:22.412 16:00:25 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:22.412 16:00:25 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:23:22.412 [2024-07-22 16:00:25.260578] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:23:22.670 16:00:25 -- common/autotest_common.sh@643 -- # es=22 00:23:22.670 16:00:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:22.670 ************************************ 00:23:22.670 END TEST dd_invalid_oflag 
00:23:22.670 ************************************ 00:23:22.670 16:00:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:22.670 16:00:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:22.670 00:23:22.670 real 0m0.078s 00:23:22.670 user 0m0.049s 00:23:22.670 sys 0m0.027s 00:23:22.670 16:00:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:22.670 16:00:25 -- common/autotest_common.sh@10 -- # set +x 00:23:22.670 16:00:25 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:23:22.670 16:00:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:22.670 16:00:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:22.670 16:00:25 -- common/autotest_common.sh@10 -- # set +x 00:23:22.670 ************************************ 00:23:22.670 START TEST dd_invalid_iflag 00:23:22.670 ************************************ 00:23:22.670 16:00:25 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:23:22.670 16:00:25 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:23:22.670 16:00:25 -- common/autotest_common.sh@640 -- # local es=0 00:23:22.670 16:00:25 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:23:22.670 16:00:25 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:22.670 16:00:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:22.670 16:00:25 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:22.670 16:00:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:22.670 16:00:25 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:22.670 16:00:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:22.670 16:00:25 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:22.670 16:00:25 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:22.670 16:00:25 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:23:22.670 [2024-07-22 16:00:25.371802] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:23:22.670 ************************************ 00:23:22.670 END TEST dd_invalid_iflag 00:23:22.670 ************************************ 00:23:22.670 16:00:25 -- common/autotest_common.sh@643 -- # es=22 00:23:22.670 16:00:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:22.670 16:00:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:22.670 16:00:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:22.670 00:23:22.670 real 0m0.061s 00:23:22.670 user 0m0.039s 00:23:22.670 sys 0m0.020s 00:23:22.670 16:00:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:22.670 16:00:25 -- common/autotest_common.sh@10 -- # set +x 00:23:22.670 16:00:25 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:23:22.670 16:00:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:22.670 16:00:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:22.670 16:00:25 -- common/autotest_common.sh@10 -- # set +x 00:23:22.670 ************************************ 00:23:22.670 START TEST dd_unknown_flag 00:23:22.670 ************************************ 00:23:22.670 16:00:25 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:23:22.670 16:00:25 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:23:22.670 16:00:25 -- common/autotest_common.sh@640 -- # local es=0 00:23:22.670 16:00:25 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:23:22.670 16:00:25 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:22.670 16:00:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:22.670 16:00:25 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:22.670 16:00:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:22.670 16:00:25 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:22.670 16:00:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:22.670 16:00:25 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:22.670 16:00:25 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:22.670 16:00:25 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:23:22.670 [2024-07-22 16:00:25.490034] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:22.670 [2024-07-22 16:00:25.490132] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59609 ] 00:23:22.928 [2024-07-22 16:00:25.623446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.928 [2024-07-22 16:00:25.684743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.928 [2024-07-22 16:00:25.731880] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:23:22.928 [2024-07-22 16:00:25.731952] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:23:22.928 [2024-07-22 16:00:25.731971] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:23:22.928 [2024-07-22 16:00:25.731990] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:23.185 [2024-07-22 16:00:25.798680] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:23:23.185 16:00:25 -- common/autotest_common.sh@643 -- # es=236 00:23:23.185 16:00:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:23.185 16:00:25 -- common/autotest_common.sh@652 -- # es=108 00:23:23.185 16:00:25 -- common/autotest_common.sh@653 -- # case "$es" in 00:23:23.186 16:00:25 -- common/autotest_common.sh@660 -- # es=1 00:23:23.186 16:00:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:23.186 00:23:23.186 real 0m0.479s 00:23:23.186 user 0m0.271s 00:23:23.186 sys 0m0.101s 00:23:23.186 ************************************ 00:23:23.186 END TEST dd_unknown_flag 00:23:23.186 ************************************ 00:23:23.186 16:00:25 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:23:23.186 16:00:25 -- common/autotest_common.sh@10 -- # set +x 00:23:23.186 16:00:25 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:23:23.186 16:00:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:23.186 16:00:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:23.186 16:00:25 -- common/autotest_common.sh@10 -- # set +x 00:23:23.186 ************************************ 00:23:23.186 START TEST dd_invalid_json 00:23:23.186 ************************************ 00:23:23.186 16:00:25 -- common/autotest_common.sh@1104 -- # invalid_json 00:23:23.186 16:00:25 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:23:23.186 16:00:25 -- dd/negative_dd.sh@95 -- # : 00:23:23.186 16:00:25 -- common/autotest_common.sh@640 -- # local es=0 00:23:23.186 16:00:25 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:23:23.186 16:00:25 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:23.186 16:00:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:23.186 16:00:25 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:23.186 16:00:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:23.186 16:00:25 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:23.186 16:00:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:23.186 16:00:25 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:23.186 16:00:25 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:23.186 16:00:25 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:23:23.186 [2024-07-22 16:00:26.000351] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
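The dd negative tests in this stretch of the log (dd_invalid_count, dd_invalid_oflag, dd_invalid_iflag, dd_unknown_flag, dd_invalid_json) all run spdk_dd through the same autotest_common.sh wrapper: execute the command, capture its exit status, remap statuses above 128, and assert that the status is non-zero. The sketch below is a simplified stand-in for that pattern, not the real helper; the function name and the fallback spdk_dd path are illustrative only.

```bash
#!/usr/bin/env bash
# Minimal sketch of the "expect failure" pattern used by the dd negative tests.
# NOTE: simplified stand-in for the real common/autotest_common.sh helpers;
#       the helper name and the default spdk_dd path are illustrative only.

SPDK_DD=${SPDK_DD:-/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd}

not_expected_to_succeed() {
    local es=0
    "$@" || es=$?              # run the command, capture its exit status
    if (( es > 128 )); then    # >128 usually means "killed by a signal";
        es=$(( es - 128 ))     # fold it back into a small value, roughly as
    fi                         # the log's es=236 -> es=108 -> es=1 remapping does
    (( es != 0 ))              # succeed only if the wrapped command failed
}

# --count must be positive, so spdk_dd is expected to reject this invocation.
if not_expected_to_succeed "$SPDK_DD" --if=/dev/zero --of=/dev/null --count=-9; then
    echo "negative test passed: invalid --count was rejected"
else
    echo "negative test FAILED: invalid --count was accepted" >&2
    exit 1
fi
```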
00:23:23.186 [2024-07-22 16:00:26.000454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59637 ] 00:23:23.443 [2024-07-22 16:00:26.136615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.443 [2024-07-22 16:00:26.195979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.443 [2024-07-22 16:00:26.196110] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:23:23.443 [2024-07-22 16:00:26.196130] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:23.443 [2024-07-22 16:00:26.196169] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:23:23.701 16:00:26 -- common/autotest_common.sh@643 -- # es=234 00:23:23.701 16:00:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:23.701 16:00:26 -- common/autotest_common.sh@652 -- # es=106 00:23:23.701 16:00:26 -- common/autotest_common.sh@653 -- # case "$es" in 00:23:23.701 16:00:26 -- common/autotest_common.sh@660 -- # es=1 00:23:23.701 16:00:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:23.701 00:23:23.701 real 0m0.358s 00:23:23.701 user 0m0.197s 00:23:23.701 sys 0m0.059s 00:23:23.702 16:00:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:23.702 ************************************ 00:23:23.702 END TEST dd_invalid_json 00:23:23.702 ************************************ 00:23:23.702 16:00:26 -- common/autotest_common.sh@10 -- # set +x 00:23:23.702 ************************************ 00:23:23.702 END TEST spdk_dd_negative 00:23:23.702 ************************************ 00:23:23.702 00:23:23.702 real 0m2.883s 00:23:23.702 user 0m1.460s 00:23:23.702 sys 0m1.057s 00:23:23.702 16:00:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:23.702 16:00:26 -- common/autotest_common.sh@10 -- # set +x 00:23:23.702 ************************************ 00:23:23.702 END TEST spdk_dd 00:23:23.702 ************************************ 00:23:23.702 00:23:23.702 real 1m11.234s 00:23:23.702 user 0m45.590s 00:23:23.702 sys 0m16.270s 00:23:23.702 16:00:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:23.702 16:00:26 -- common/autotest_common.sh@10 -- # set +x 00:23:23.702 16:00:26 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:23:23.702 16:00:26 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:23:23.702 16:00:26 -- spdk/autotest.sh@268 -- # timing_exit lib 00:23:23.702 16:00:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:23.702 16:00:26 -- common/autotest_common.sh@10 -- # set +x 00:23:23.702 16:00:26 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:23:23.702 16:00:26 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:23:23.702 16:00:26 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:23:23.702 16:00:26 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:23:23.702 16:00:26 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:23:23.702 16:00:26 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:23:23.702 16:00:26 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:23:23.702 16:00:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:23.702 16:00:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:23.702 16:00:26 -- common/autotest_common.sh@10 -- # set +x 00:23:23.702 ************************************ 00:23:23.702 START 
TEST nvmf_tcp 00:23:23.702 ************************************ 00:23:23.702 16:00:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:23:23.702 * Looking for test storage... 00:23:23.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:23:23.702 16:00:26 -- nvmf/nvmf.sh@10 -- # uname -s 00:23:23.702 16:00:26 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:23:23.702 16:00:26 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:23.702 16:00:26 -- nvmf/common.sh@7 -- # uname -s 00:23:23.702 16:00:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.702 16:00:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.702 16:00:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.702 16:00:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.702 16:00:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.702 16:00:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.702 16:00:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.702 16:00:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.702 16:00:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.702 16:00:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.702 16:00:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:23:23.702 16:00:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:23:23.702 16:00:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.702 16:00:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.702 16:00:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:23.702 16:00:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:23.702 16:00:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.702 16:00:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.702 16:00:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.702 16:00:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.702 16:00:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.702 16:00:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.702 16:00:26 -- paths/export.sh@5 -- # export PATH 00:23:23.702 16:00:26 
-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.702 16:00:26 -- nvmf/common.sh@46 -- # : 0 00:23:23.702 16:00:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:23.702 16:00:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:23.702 16:00:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:23.702 16:00:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.702 16:00:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.702 16:00:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:23.702 16:00:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:23.702 16:00:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:23.702 16:00:26 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:23.702 16:00:26 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:23:23.702 16:00:26 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:23:23.702 16:00:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:23.702 16:00:26 -- common/autotest_common.sh@10 -- # set +x 00:23:23.702 16:00:26 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:23:23.702 16:00:26 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:23:23.702 16:00:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:23.702 16:00:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:23.702 16:00:26 -- common/autotest_common.sh@10 -- # set +x 00:23:23.702 ************************************ 00:23:23.702 START TEST nvmf_host_management 00:23:23.702 ************************************ 00:23:23.702 16:00:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:23:23.960 * Looking for test storage... 
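Each `run_test <name> <command ...>` call in this log, such as `run_test nvmf_host_management ...` just above, prints the START TEST / END TEST banners, argument-checks the call, runs the test body, and reports its duration. A rough, hedged reconstruction of that behaviour for readers without autotest_common.sh at hand; the real helper also toggles xtrace and records per-test timing for the CI report, which is omitted here.

```bash
# Rough sketch of the run_test harness behaviour seen throughout this log.
# NOTE: illustrative only -- not the actual common/autotest_common.sh code.
run_test() {
    local test_name=$1
    shift
    if [ "$#" -lt 1 ]; then    # loose stand-in for the "'[' 2 -le 1 ']'" guard
        echo "run_test: no test command given for $test_name" >&2
        return 1
    fi

    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"

    local start=$SECONDS rc=0
    "$@" || rc=$?              # run the test body (script or shell function)

    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    echo "$test_name: $(( SECONDS - start ))s, exit status $rc"
    return "$rc"
}

# Usage, mirroring the log:
# run_test nvmf_host_management \
#     /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp
```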
00:23:23.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:23.960 16:00:26 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:23.960 16:00:26 -- nvmf/common.sh@7 -- # uname -s 00:23:23.960 16:00:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.960 16:00:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.960 16:00:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.960 16:00:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.960 16:00:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.960 16:00:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.960 16:00:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.960 16:00:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.960 16:00:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.960 16:00:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.960 16:00:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:23:23.960 16:00:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:23:23.960 16:00:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.960 16:00:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.960 16:00:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:23.960 16:00:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:23.960 16:00:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.960 16:00:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.960 16:00:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.960 16:00:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.960 16:00:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.960 16:00:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.960 16:00:26 -- 
paths/export.sh@5 -- # export PATH 00:23:23.960 16:00:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.960 16:00:26 -- nvmf/common.sh@46 -- # : 0 00:23:23.961 16:00:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:23.961 16:00:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:23.961 16:00:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:23.961 16:00:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.961 16:00:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.961 16:00:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:23.961 16:00:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:23.961 16:00:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:23.961 16:00:26 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:23.961 16:00:26 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:23.961 16:00:26 -- target/host_management.sh@104 -- # nvmftestinit 00:23:23.961 16:00:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:23.961 16:00:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.961 16:00:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:23.961 16:00:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:23.961 16:00:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:23.961 16:00:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.961 16:00:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:23.961 16:00:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.961 16:00:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:23.961 16:00:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:23.961 16:00:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:23.961 16:00:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:23.961 16:00:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:23.961 16:00:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:23.961 16:00:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.961 16:00:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.961 16:00:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:23.961 16:00:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:23.961 16:00:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:23.961 16:00:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:23.961 16:00:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:23.961 16:00:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.961 16:00:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:23.961 16:00:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:23.961 16:00:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:23.961 16:00:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:23.961 16:00:26 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:23:23.961 Cannot find device "nvmf_init_br" 00:23:23.961 16:00:26 -- nvmf/common.sh@153 -- # true 00:23:23.961 16:00:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:23.961 Cannot find device "nvmf_tgt_br" 00:23:23.961 16:00:26 -- nvmf/common.sh@154 -- # true 00:23:23.961 16:00:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:23.961 Cannot find device "nvmf_tgt_br2" 00:23:23.961 16:00:26 -- nvmf/common.sh@155 -- # true 00:23:23.961 16:00:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:23.961 Cannot find device "nvmf_init_br" 00:23:23.961 16:00:26 -- nvmf/common.sh@156 -- # true 00:23:23.961 16:00:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:23.961 Cannot find device "nvmf_tgt_br" 00:23:23.961 16:00:26 -- nvmf/common.sh@157 -- # true 00:23:23.961 16:00:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:23.961 Cannot find device "nvmf_tgt_br2" 00:23:23.961 16:00:26 -- nvmf/common.sh@158 -- # true 00:23:23.961 16:00:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:23.961 Cannot find device "nvmf_br" 00:23:23.961 16:00:26 -- nvmf/common.sh@159 -- # true 00:23:23.961 16:00:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:23.961 Cannot find device "nvmf_init_if" 00:23:23.961 16:00:26 -- nvmf/common.sh@160 -- # true 00:23:23.961 16:00:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:23.961 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:23.961 16:00:26 -- nvmf/common.sh@161 -- # true 00:23:23.961 16:00:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:23.961 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:23.961 16:00:26 -- nvmf/common.sh@162 -- # true 00:23:23.961 16:00:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:23.961 16:00:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:23.961 16:00:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:23.961 16:00:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:23.961 16:00:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:23.961 16:00:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:23.961 16:00:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:23.961 16:00:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:23.961 16:00:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:24.219 16:00:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:24.219 16:00:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:24.219 16:00:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:24.219 16:00:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:24.219 16:00:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:24.220 16:00:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:24.220 16:00:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:24.220 16:00:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:24.220 16:00:26 -- nvmf/common.sh@192 
-- # ip link set nvmf_br up 00:23:24.220 16:00:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:24.220 16:00:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:24.220 16:00:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:24.220 16:00:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:24.220 16:00:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:24.220 16:00:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:24.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:23:24.220 00:23:24.220 --- 10.0.0.2 ping statistics --- 00:23:24.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.220 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:23:24.220 16:00:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:24.220 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:24.220 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:23:24.220 00:23:24.220 --- 10.0.0.3 ping statistics --- 00:23:24.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.220 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:23:24.220 16:00:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:24.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:24.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:23:24.220 00:23:24.220 --- 10.0.0.1 ping statistics --- 00:23:24.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.220 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:23:24.220 16:00:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.220 16:00:27 -- nvmf/common.sh@421 -- # return 0 00:23:24.220 16:00:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:24.220 16:00:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.220 16:00:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:24.220 16:00:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:24.220 16:00:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.220 16:00:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:24.220 16:00:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:24.220 16:00:27 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:23:24.220 16:00:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:24.220 16:00:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:24.220 16:00:27 -- common/autotest_common.sh@10 -- # set +x 00:23:24.220 ************************************ 00:23:24.220 START TEST nvmf_host_management 00:23:24.220 ************************************ 00:23:24.220 16:00:27 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:23:24.220 16:00:27 -- target/host_management.sh@69 -- # starttarget 00:23:24.220 16:00:27 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:23:24.220 16:00:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:24.220 16:00:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:24.220 16:00:27 -- common/autotest_common.sh@10 -- # set +x 00:23:24.220 16:00:27 -- nvmf/common.sh@469 -- # nvmfpid=59890 00:23:24.220 16:00:27 -- nvmf/common.sh@470 -- # waitforlisten 59890 00:23:24.220 16:00:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:24.220 16:00:27 -- common/autotest_common.sh@819 -- # '[' -z 59890 ']' 00:23:24.220 16:00:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.220 16:00:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:24.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.220 16:00:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.220 16:00:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:24.220 16:00:27 -- common/autotest_common.sh@10 -- # set +x 00:23:24.478 [2024-07-22 16:00:27.129824] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:24.478 [2024-07-22 16:00:27.129960] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.478 [2024-07-22 16:00:27.276096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:24.736 [2024-07-22 16:00:27.379215] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:24.736 [2024-07-22 16:00:27.379757] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.736 [2024-07-22 16:00:27.379798] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.736 [2024-07-22 16:00:27.379820] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.736 [2024-07-22 16:00:27.380002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.736 [2024-07-22 16:00:27.380144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:24.736 [2024-07-22 16:00:27.380774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:24.736 [2024-07-22 16:00:27.380819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.700 16:00:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:25.700 16:00:28 -- common/autotest_common.sh@852 -- # return 0 00:23:25.700 16:00:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:25.700 16:00:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:25.700 16:00:28 -- common/autotest_common.sh@10 -- # set +x 00:23:25.700 16:00:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.700 16:00:28 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:25.700 16:00:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:25.700 16:00:28 -- common/autotest_common.sh@10 -- # set +x 00:23:25.700 [2024-07-22 16:00:28.233242] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.700 16:00:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:25.700 16:00:28 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:23:25.700 16:00:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:25.700 16:00:28 -- common/autotest_common.sh@10 -- # set +x 00:23:25.700 16:00:28 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:23:25.700 16:00:28 -- target/host_management.sh@23 -- # cat 00:23:25.700 16:00:28 -- 
target/host_management.sh@30 -- # rpc_cmd 00:23:25.700 16:00:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:25.700 16:00:28 -- common/autotest_common.sh@10 -- # set +x 00:23:25.700 Malloc0 00:23:25.700 [2024-07-22 16:00:28.303964] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.700 16:00:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:25.700 16:00:28 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:23:25.700 16:00:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:25.700 16:00:28 -- common/autotest_common.sh@10 -- # set +x 00:23:25.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:25.700 16:00:28 -- target/host_management.sh@73 -- # perfpid=59949 00:23:25.700 16:00:28 -- target/host_management.sh@74 -- # waitforlisten 59949 /var/tmp/bdevperf.sock 00:23:25.700 16:00:28 -- common/autotest_common.sh@819 -- # '[' -z 59949 ']' 00:23:25.700 16:00:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.700 16:00:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:25.700 16:00:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.700 16:00:28 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:23:25.700 16:00:28 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:25.700 16:00:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:25.700 16:00:28 -- nvmf/common.sh@520 -- # config=() 00:23:25.700 16:00:28 -- common/autotest_common.sh@10 -- # set +x 00:23:25.700 16:00:28 -- nvmf/common.sh@520 -- # local subsystem config 00:23:25.700 16:00:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:25.700 16:00:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:25.700 { 00:23:25.700 "params": { 00:23:25.700 "name": "Nvme$subsystem", 00:23:25.700 "trtype": "$TEST_TRANSPORT", 00:23:25.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.700 "adrfam": "ipv4", 00:23:25.700 "trsvcid": "$NVMF_PORT", 00:23:25.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.700 "hdgst": ${hdgst:-false}, 00:23:25.700 "ddgst": ${ddgst:-false} 00:23:25.700 }, 00:23:25.700 "method": "bdev_nvme_attach_controller" 00:23:25.700 } 00:23:25.700 EOF 00:23:25.700 )") 00:23:25.700 16:00:28 -- nvmf/common.sh@542 -- # cat 00:23:25.700 16:00:28 -- nvmf/common.sh@544 -- # jq . 00:23:25.700 16:00:28 -- nvmf/common.sh@545 -- # IFS=, 00:23:25.700 16:00:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:25.700 "params": { 00:23:25.700 "name": "Nvme0", 00:23:25.700 "trtype": "tcp", 00:23:25.700 "traddr": "10.0.0.2", 00:23:25.700 "adrfam": "ipv4", 00:23:25.700 "trsvcid": "4420", 00:23:25.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:25.700 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:25.700 "hdgst": false, 00:23:25.700 "ddgst": false 00:23:25.700 }, 00:23:25.700 "method": "bdev_nvme_attach_controller" 00:23:25.700 }' 00:23:25.700 [2024-07-22 16:00:28.401978] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
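The `gen_nvmf_target_json 0` call above expands its heredoc template into the single `bdev_nvme_attach_controller` entry that bdevperf then reads from `/dev/fd/63`. For reference, the same configuration can be written to an ordinary file and passed to bdevperf directly. A hedged sketch follows: the attach parameters and bdevperf flags are copied from the log, while the outer subsystems/bdev wrapper and the /tmp path are assumptions based on the usual SPDK JSON config layout rather than anything shown here.

```bash
# Sketch: hand bdevperf the same Nvme0 attach configuration from a regular file
# instead of the /dev/fd/63 process substitution used by the test. The outer
# "subsystems"/"bdev" wrapper is an assumption; only the attach_controller
# entry appears verbatim in the log.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same RPC socket, queue depth, I/O size, workload and duration as the log.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme0.json \
    -q 64 -o 65536 -w verify -t 10
```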
00:23:25.700 [2024-07-22 16:00:28.402117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59949 ] 00:23:25.700 [2024-07-22 16:00:28.541393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.967 [2024-07-22 16:00:28.627508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.967 Running I/O for 10 seconds... 00:23:26.902 16:00:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:26.902 16:00:29 -- common/autotest_common.sh@852 -- # return 0 00:23:26.902 16:00:29 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:26.902 16:00:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:26.902 16:00:29 -- common/autotest_common.sh@10 -- # set +x 00:23:26.902 16:00:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:26.902 16:00:29 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:26.902 16:00:29 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:23:26.902 16:00:29 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:26.902 16:00:29 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:23:26.902 16:00:29 -- target/host_management.sh@52 -- # local ret=1 00:23:26.902 16:00:29 -- target/host_management.sh@53 -- # local i 00:23:26.902 16:00:29 -- target/host_management.sh@54 -- # (( i = 10 )) 00:23:26.902 16:00:29 -- target/host_management.sh@54 -- # (( i != 0 )) 00:23:26.902 16:00:29 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:23:26.902 16:00:29 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:23:26.902 16:00:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:26.902 16:00:29 -- common/autotest_common.sh@10 -- # set +x 00:23:26.902 16:00:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:26.902 16:00:29 -- target/host_management.sh@55 -- # read_io_count=1916 00:23:26.902 16:00:29 -- target/host_management.sh@58 -- # '[' 1916 -ge 100 ']' 00:23:26.902 16:00:29 -- target/host_management.sh@59 -- # ret=0 00:23:26.902 16:00:29 -- target/host_management.sh@60 -- # break 00:23:26.902 16:00:29 -- target/host_management.sh@64 -- # return 0 00:23:26.902 16:00:29 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:23:26.902 16:00:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:26.902 16:00:29 -- common/autotest_common.sh@10 -- # set +x 00:23:26.902 [2024-07-22 16:00:29.680129] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b800 is same with the state(5) to be set 00:23:26.902 [2024-07-22 16:00:29.680198] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b800 is same with the state(5) to be set 00:23:26.902 [2024-07-22 16:00:29.680213] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b800 is same with the state(5) to be set 00:23:26.902 [2024-07-22 16:00:29.680221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b800 is same with the state(5) to be set 00:23:26.902 [2024-07-22 16:00:29.680230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1c9b800 is same with the state(5) to be set 00:23:26.902 [2024-07-22 16:00:29.680238] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b800 is same with the state(5) to be set 00:23:26.902 [2024-07-22 16:00:29.680247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b800 is same with the state(5) to be set 00:23:26.902 [2024-07-22 16:00:29.680255] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b800 is same with the state(5) to be set 00:23:26.902 [2024-07-22 16:00:29.680263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b800 is same with the state(5) to be set 00:23:26.902 [2024-07-22 16:00:29.680272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b800 is same with the state(5) to be set 00:23:26.902 [2024-07-22 16:00:29.680280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b800 is same with the state(5) to be set 00:23:26.903 [2024-07-22 16:00:29.680288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b800 is same with the state(5) to be set 00:23:26.903 [2024-07-22 16:00:29.680296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b800 is same with the state(5) to be set 00:23:26.903 [2024-07-22 16:00:29.680306] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b800 is same with the state(5) to be set 00:23:26.903 [2024-07-22 16:00:29.680315] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b800 is same with the state(5) to be set 00:23:26.903 [2024-07-22 16:00:29.680323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b800 is same with the state(5) to be set 00:23:26.903 [2024-07-22 16:00:29.680332] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b800 is same with the state(5) to be set 00:23:26.903 [2024-07-22 16:00:29.680466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.680527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.680586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.680612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.680632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.680648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.680666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.680681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.680699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:40 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.680713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.680741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.680756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.680773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.680788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.680806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.680823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.680842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.680858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.680876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.680890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.680907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.680929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.680952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.680969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.680987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.681012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.681030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.681046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.681063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 
lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.681078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.681095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.681109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.681127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.681141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.681159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.681175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.681193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.681208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.681225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.681240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.681257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.681272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.681289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.681304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.681322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.681337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.681355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.681372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.681389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4224 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.681404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.681421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.681438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.681458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.681509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.681534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.681550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.681568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.681584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.681602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.681617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.681635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.681650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.681668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.903 [2024-07-22 16:00:29.681683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.903 [2024-07-22 16:00:29.681701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.681716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.681734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.681749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.681766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.681782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.681799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.681814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.681833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.681849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.681867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.681883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.681901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.681917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.681935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.681950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.681968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.681983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 16:00:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:26.904 [2024-07-22 16:00:29.682018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 16:00:29 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:23:26.904 [2024-07-22 16:00:29.682314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 
[2024-07-22 16:00:29.682442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 16:00:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:26.904 [2024-07-22 16:00:29.682642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.904 [2024-07-22 16:00:29.682773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.904 [2024-07-22 16:00:29.682790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ffba0 is same with the state(5) to be set 00:23:26.904 16:00:29 -- common/autotest_common.sh@10 -- # set +x 00:23:26.904 [2024-07-22 16:00:29.682859] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18ffba0 was disconnected and freed. reset controller. 00:23:26.904 [2024-07-22 16:00:29.684407] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:26.904 task offset: 1920 on job bdev=Nvme0n1 fails 00:23:26.904 00:23:26.904 Latency(us) 00:23:26.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.904 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:26.904 Job: Nvme0n1 ended in about 0.90 seconds with error 00:23:26.904 Verification LBA range: start 0x0 length 0x400 00:23:26.904 Nvme0n1 : 0.90 2260.56 141.28 70.78 0.00 27075.11 7000.44 29312.47 00:23:26.904 =================================================================================================================== 00:23:26.904 Total : 2260.56 141.28 70.78 0.00 27075.11 7000.44 29312.47 00:23:26.904 [2024-07-22 16:00:29.687109] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:26.904 [2024-07-22 16:00:29.687173] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ff3d0 (9): Bad file descriptor 00:23:26.905 16:00:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:26.905 16:00:29 -- target/host_management.sh@87 -- # sleep 1 00:23:26.905 [2024-07-22 16:00:29.697328] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:27.845 16:00:30 -- target/host_management.sh@91 -- # kill -9 59949 00:23:27.845 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (59949) - No such process 00:23:27.845 16:00:30 -- target/host_management.sh@91 -- # true 00:23:27.845 16:00:30 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:23:27.845 16:00:30 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:23:27.845 16:00:30 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:27.845 16:00:30 -- nvmf/common.sh@520 -- # config=() 00:23:27.845 16:00:30 -- nvmf/common.sh@520 -- # local subsystem config 00:23:27.845 16:00:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:27.845 16:00:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:27.845 { 00:23:27.845 "params": { 00:23:27.845 "name": "Nvme$subsystem", 00:23:27.845 "trtype": "$TEST_TRANSPORT", 00:23:27.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.845 "adrfam": "ipv4", 00:23:27.845 "trsvcid": "$NVMF_PORT", 00:23:27.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.845 "hdgst": ${hdgst:-false}, 00:23:27.845 "ddgst": ${ddgst:-false} 00:23:27.845 }, 00:23:27.845 "method": "bdev_nvme_attach_controller" 00:23:27.845 } 00:23:27.845 EOF 00:23:27.845 )") 00:23:27.845 16:00:30 -- nvmf/common.sh@542 -- # cat 00:23:27.845 16:00:30 -- nvmf/common.sh@544 -- # jq . 
00:23:27.845 16:00:30 -- nvmf/common.sh@545 -- # IFS=, 00:23:27.845 16:00:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:27.845 "params": { 00:23:27.845 "name": "Nvme0", 00:23:27.845 "trtype": "tcp", 00:23:27.845 "traddr": "10.0.0.2", 00:23:27.845 "adrfam": "ipv4", 00:23:27.845 "trsvcid": "4420", 00:23:27.845 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:27.845 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:27.845 "hdgst": false, 00:23:27.845 "ddgst": false 00:23:27.845 }, 00:23:27.845 "method": "bdev_nvme_attach_controller" 00:23:27.845 }' 00:23:28.103 [2024-07-22 16:00:30.755933] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:28.103 [2024-07-22 16:00:30.756029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59993 ] 00:23:28.103 [2024-07-22 16:00:30.905131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.361 [2024-07-22 16:00:30.983835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.361 Running I/O for 1 seconds... 00:23:29.295 00:23:29.295 Latency(us) 00:23:29.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.295 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:29.295 Verification LBA range: start 0x0 length 0x400 00:23:29.295 Nvme0n1 : 1.01 2589.90 161.87 0.00 0.00 24328.19 3157.64 27763.43 00:23:29.295 =================================================================================================================== 00:23:29.295 Total : 2589.90 161.87 0.00 0.00 24328.19 3157.64 27763.43 00:23:29.552 16:00:32 -- target/host_management.sh@101 -- # stoptarget 00:23:29.552 16:00:32 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:23:29.552 16:00:32 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:23:29.552 16:00:32 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:23:29.552 16:00:32 -- target/host_management.sh@40 -- # nvmftestfini 00:23:29.552 16:00:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:29.552 16:00:32 -- nvmf/common.sh@116 -- # sync 00:23:29.809 16:00:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:29.809 16:00:32 -- nvmf/common.sh@119 -- # set +e 00:23:29.809 16:00:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:29.810 16:00:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:29.810 rmmod nvme_tcp 00:23:29.810 rmmod nvme_fabrics 00:23:29.810 rmmod nvme_keyring 00:23:29.810 16:00:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:29.810 16:00:32 -- nvmf/common.sh@123 -- # set -e 00:23:29.810 16:00:32 -- nvmf/common.sh@124 -- # return 0 00:23:29.810 16:00:32 -- nvmf/common.sh@477 -- # '[' -n 59890 ']' 00:23:29.810 16:00:32 -- nvmf/common.sh@478 -- # killprocess 59890 00:23:29.810 16:00:32 -- common/autotest_common.sh@926 -- # '[' -z 59890 ']' 00:23:29.810 16:00:32 -- common/autotest_common.sh@930 -- # kill -0 59890 00:23:29.810 16:00:32 -- common/autotest_common.sh@931 -- # uname 00:23:29.810 16:00:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:29.810 16:00:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59890 00:23:29.810 16:00:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:29.810 killing process with pid 
59890 00:23:29.810 16:00:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:29.810 16:00:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59890' 00:23:29.810 16:00:32 -- common/autotest_common.sh@945 -- # kill 59890 00:23:29.810 16:00:32 -- common/autotest_common.sh@950 -- # wait 59890 00:23:30.072 [2024-07-22 16:00:32.690368] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:23:30.072 16:00:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:30.072 16:00:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:30.072 16:00:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:30.072 16:00:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:30.072 16:00:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:30.072 16:00:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.072 16:00:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:30.072 16:00:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.072 16:00:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:30.072 00:23:30.072 real 0m5.695s 00:23:30.072 user 0m24.149s 00:23:30.072 sys 0m1.313s 00:23:30.072 16:00:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:30.072 16:00:32 -- common/autotest_common.sh@10 -- # set +x 00:23:30.072 ************************************ 00:23:30.072 END TEST nvmf_host_management 00:23:30.072 ************************************ 00:23:30.072 16:00:32 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:23:30.072 ************************************ 00:23:30.072 END TEST nvmf_host_management 00:23:30.072 ************************************ 00:23:30.072 00:23:30.072 real 0m6.241s 00:23:30.072 user 0m24.274s 00:23:30.072 sys 0m1.527s 00:23:30.072 16:00:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:30.072 16:00:32 -- common/autotest_common.sh@10 -- # set +x 00:23:30.072 16:00:32 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:23:30.072 16:00:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:30.072 16:00:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:30.072 16:00:32 -- common/autotest_common.sh@10 -- # set +x 00:23:30.072 ************************************ 00:23:30.072 START TEST nvmf_lvol 00:23:30.072 ************************************ 00:23:30.072 16:00:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:23:30.072 * Looking for test storage... 
00:23:30.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:30.072 16:00:32 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:30.072 16:00:32 -- nvmf/common.sh@7 -- # uname -s 00:23:30.072 16:00:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.072 16:00:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.072 16:00:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.072 16:00:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.072 16:00:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.072 16:00:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.072 16:00:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.072 16:00:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.072 16:00:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.072 16:00:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.072 16:00:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:23:30.072 16:00:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:23:30.072 16:00:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.072 16:00:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.072 16:00:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:30.072 16:00:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:30.072 16:00:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.072 16:00:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.072 16:00:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.072 16:00:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.072 16:00:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.072 16:00:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.072 16:00:32 -- 
paths/export.sh@5 -- # export PATH 00:23:30.072 16:00:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.072 16:00:32 -- nvmf/common.sh@46 -- # : 0 00:23:30.072 16:00:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:30.072 16:00:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:30.072 16:00:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:30.072 16:00:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.072 16:00:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.072 16:00:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:30.072 16:00:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:30.072 16:00:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:30.072 16:00:32 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:30.072 16:00:32 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:30.072 16:00:32 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:23:30.072 16:00:32 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:23:30.072 16:00:32 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:30.072 16:00:32 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:23:30.072 16:00:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:30.072 16:00:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.072 16:00:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:30.072 16:00:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:30.072 16:00:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:30.072 16:00:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.072 16:00:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:30.072 16:00:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.072 16:00:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:30.072 16:00:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:30.072 16:00:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:30.072 16:00:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:30.072 16:00:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:30.072 16:00:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:30.072 16:00:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.072 16:00:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:30.072 16:00:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:30.072 16:00:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:30.072 16:00:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:30.072 16:00:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:30.072 16:00:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:30.072 16:00:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.072 16:00:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:30.072 16:00:32 -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:30.072 16:00:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:30.072 16:00:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:30.072 16:00:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:30.328 16:00:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:30.328 Cannot find device "nvmf_tgt_br" 00:23:30.328 16:00:32 -- nvmf/common.sh@154 -- # true 00:23:30.328 16:00:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:30.328 Cannot find device "nvmf_tgt_br2" 00:23:30.328 16:00:32 -- nvmf/common.sh@155 -- # true 00:23:30.328 16:00:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:30.328 16:00:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:30.328 Cannot find device "nvmf_tgt_br" 00:23:30.328 16:00:32 -- nvmf/common.sh@157 -- # true 00:23:30.328 16:00:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:30.328 Cannot find device "nvmf_tgt_br2" 00:23:30.328 16:00:32 -- nvmf/common.sh@158 -- # true 00:23:30.328 16:00:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:30.328 16:00:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:30.328 16:00:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:30.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:30.328 16:00:33 -- nvmf/common.sh@161 -- # true 00:23:30.328 16:00:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:30.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:30.328 16:00:33 -- nvmf/common.sh@162 -- # true 00:23:30.328 16:00:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:30.328 16:00:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:30.328 16:00:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:30.328 16:00:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:30.328 16:00:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:30.328 16:00:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:30.328 16:00:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:30.328 16:00:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:30.328 16:00:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:30.328 16:00:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:30.328 16:00:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:30.328 16:00:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:30.328 16:00:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:30.328 16:00:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:30.328 16:00:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:30.328 16:00:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:30.328 16:00:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:30.328 16:00:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:30.328 16:00:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:30.585 16:00:33 -- 
nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:30.585 16:00:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:30.585 16:00:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:30.585 16:00:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:30.585 16:00:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:30.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:30.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:23:30.585 00:23:30.585 --- 10.0.0.2 ping statistics --- 00:23:30.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.585 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:23:30.585 16:00:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:30.585 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:30.585 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:23:30.585 00:23:30.585 --- 10.0.0.3 ping statistics --- 00:23:30.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.585 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:23:30.585 16:00:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:30.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:30.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:23:30.585 00:23:30.585 --- 10.0.0.1 ping statistics --- 00:23:30.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.585 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:23:30.585 16:00:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.585 16:00:33 -- nvmf/common.sh@421 -- # return 0 00:23:30.585 16:00:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:30.585 16:00:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.585 16:00:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:30.585 16:00:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:30.585 16:00:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.585 16:00:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:30.585 16:00:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:30.585 16:00:33 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:23:30.585 16:00:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:30.585 16:00:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:30.585 16:00:33 -- common/autotest_common.sh@10 -- # set +x 00:23:30.585 16:00:33 -- nvmf/common.sh@469 -- # nvmfpid=60215 00:23:30.585 16:00:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:23:30.585 16:00:33 -- nvmf/common.sh@470 -- # waitforlisten 60215 00:23:30.585 16:00:33 -- common/autotest_common.sh@819 -- # '[' -z 60215 ']' 00:23:30.585 16:00:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.585 16:00:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:30.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.585 16:00:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
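The nvmf_veth_init steps traced above build the virtual test network that the rest of this lvol test talks over. Condensed into a plain shell sketch (every command is taken from the trace above; the namespace, interface and address names are the ones the harness itself uses, the second target interface nvmf_tgt_if2/10.0.0.3 is set up the same way, and error handling is omitted):

    # network namespace that will host the NVMe-oF target
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: one for the initiator side, one for the target side
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # addresses: 10.0.0.1 = initiator, 10.0.0.2 = target (inside the namespace)
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the two host-side veth ends together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP traffic in and verify reachability before starting the target
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2

With that topology in place, nvmf_tgt is launched inside nvmf_tgt_ns_spdk (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt ...) while the initiator-side tools run on the host and reach it at 10.0.0.2:4420, as the ping statistics and listener setup below confirm.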
00:23:30.585 16:00:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:30.585 16:00:33 -- common/autotest_common.sh@10 -- # set +x 00:23:30.585 [2024-07-22 16:00:33.358282] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:30.585 [2024-07-22 16:00:33.358422] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.843 [2024-07-22 16:00:33.507158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:30.843 [2024-07-22 16:00:33.594475] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:30.843 [2024-07-22 16:00:33.594893] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.843 [2024-07-22 16:00:33.595100] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.843 [2024-07-22 16:00:33.595382] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.843 [2024-07-22 16:00:33.595671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.843 [2024-07-22 16:00:33.595775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.843 [2024-07-22 16:00:33.595794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.776 16:00:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:31.776 16:00:34 -- common/autotest_common.sh@852 -- # return 0 00:23:31.777 16:00:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:31.777 16:00:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:31.777 16:00:34 -- common/autotest_common.sh@10 -- # set +x 00:23:31.777 16:00:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.777 16:00:34 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:31.777 [2024-07-22 16:00:34.604971] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.777 16:00:34 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:32.392 16:00:34 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:23:32.392 16:00:34 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:32.392 16:00:35 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:23:32.392 16:00:35 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:23:32.649 16:00:35 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:23:33.215 16:00:35 -- target/nvmf_lvol.sh@29 -- # lvs=590202aa-af35-41ae-a144-a3bef5c7e880 00:23:33.215 16:00:35 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 590202aa-af35-41ae-a144-a3bef5c7e880 lvol 20 00:23:33.472 16:00:36 -- target/nvmf_lvol.sh@32 -- # lvol=8490c0fb-0850-4b5c-a90a-33beb2dde2de 00:23:33.472 16:00:36 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:23:33.730 16:00:36 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 8490c0fb-0850-4b5c-a90a-33beb2dde2de 00:23:33.988 16:00:36 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:34.245 [2024-07-22 16:00:36.974314] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.245 16:00:36 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:34.502 16:00:37 -- target/nvmf_lvol.sh@42 -- # perf_pid=60296 00:23:34.502 16:00:37 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:23:34.502 16:00:37 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:23:35.882 16:00:38 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 8490c0fb-0850-4b5c-a90a-33beb2dde2de MY_SNAPSHOT 00:23:35.882 16:00:38 -- target/nvmf_lvol.sh@47 -- # snapshot=bee9d195-71e7-45c5-be97-463c505e237e 00:23:35.882 16:00:38 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 8490c0fb-0850-4b5c-a90a-33beb2dde2de 30 00:23:36.447 16:00:39 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone bee9d195-71e7-45c5-be97-463c505e237e MY_CLONE 00:23:36.705 16:00:39 -- target/nvmf_lvol.sh@49 -- # clone=c5ff358c-7f38-4ce0-95bf-cbfb438bc2b2 00:23:36.705 16:00:39 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate c5ff358c-7f38-4ce0-95bf-cbfb438bc2b2 00:23:37.271 16:00:40 -- target/nvmf_lvol.sh@53 -- # wait 60296 00:23:45.400 Initializing NVMe Controllers 00:23:45.400 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:23:45.400 Controller IO queue size 128, less than required. 00:23:45.400 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:45.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:23:45.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:23:45.400 Initialization complete. Launching workers. 
00:23:45.400 ======================================================== 00:23:45.400 Latency(us) 00:23:45.400 Device Information : IOPS MiB/s Average min max 00:23:45.400 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8991.70 35.12 14235.79 249.78 69185.14 00:23:45.400 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8560.60 33.44 14958.90 3137.47 82462.09 00:23:45.400 ======================================================== 00:23:45.400 Total : 17552.29 68.56 14588.46 249.78 82462.09 00:23:45.400 00:23:45.400 16:00:47 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:45.400 16:00:47 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8490c0fb-0850-4b5c-a90a-33beb2dde2de 00:23:45.400 16:00:48 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 590202aa-af35-41ae-a144-a3bef5c7e880 00:23:45.662 16:00:48 -- target/nvmf_lvol.sh@60 -- # rm -f 00:23:45.662 16:00:48 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:23:45.662 16:00:48 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:23:45.662 16:00:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:45.662 16:00:48 -- nvmf/common.sh@116 -- # sync 00:23:45.662 16:00:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:45.662 16:00:48 -- nvmf/common.sh@119 -- # set +e 00:23:45.662 16:00:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:45.662 16:00:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:45.662 rmmod nvme_tcp 00:23:45.921 rmmod nvme_fabrics 00:23:45.921 rmmod nvme_keyring 00:23:45.921 16:00:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:45.921 16:00:48 -- nvmf/common.sh@123 -- # set -e 00:23:45.921 16:00:48 -- nvmf/common.sh@124 -- # return 0 00:23:45.921 16:00:48 -- nvmf/common.sh@477 -- # '[' -n 60215 ']' 00:23:45.921 16:00:48 -- nvmf/common.sh@478 -- # killprocess 60215 00:23:45.921 16:00:48 -- common/autotest_common.sh@926 -- # '[' -z 60215 ']' 00:23:45.921 16:00:48 -- common/autotest_common.sh@930 -- # kill -0 60215 00:23:45.921 16:00:48 -- common/autotest_common.sh@931 -- # uname 00:23:45.921 16:00:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:45.921 16:00:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60215 00:23:45.921 killing process with pid 60215 00:23:45.921 16:00:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:45.921 16:00:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:45.921 16:00:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60215' 00:23:45.921 16:00:48 -- common/autotest_common.sh@945 -- # kill 60215 00:23:45.921 16:00:48 -- common/autotest_common.sh@950 -- # wait 60215 00:23:46.179 16:00:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:46.179 16:00:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:46.179 16:00:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:46.179 16:00:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:46.179 16:00:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:46.179 16:00:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.180 16:00:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:46.180 16:00:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.180 16:00:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 
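Stripped of the xtrace noise, the nvmf_lvol test that produced the results above drives the target entirely through rpc.py. A condensed sketch of the sequence, reconstructed from the calls visible in this log (shell variables stand in for the UUIDs the real run captured, e.g. lvstore 590202aa-... and lvol 8490c0fb-...):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # two 64 MiB malloc bdevs striped into a raid0, used as the lvolstore base bdev
    $rpc bdev_malloc_create 64 512                       # -> Malloc0
    $rpc bdev_malloc_create 64 512                       # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u $lvs lvol 20)
    # expose the 20 MiB lvol over NVMe/TCP on the namespace address set up earlier
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # run I/O from the initiator side while snapshot/resize/clone/inflate happen underneath
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!
    snap=$($rpc bdev_lvol_snapshot $lvol MY_SNAPSHOT)
    $rpc bdev_lvol_resize $lvol 30
    clone=$($rpc bdev_lvol_clone $snap MY_CLONE)
    $rpc bdev_lvol_inflate $clone
    wait $perf_pid
    # teardown, mirrored by the delete calls below
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_lvol_delete $lvol
    $rpc bdev_lvol_delete_lvstore -u $lvs

The per-core IOPS split in the table above comes from the -c 0x18 core mask handed to spdk_nvme_perf (lcores 3 and 4), while the snapshot, resize to 30 MiB, clone and inflate run concurrently against the same lvol on the target side.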
00:23:46.180 ************************************ 00:23:46.180 END TEST nvmf_lvol 00:23:46.180 ************************************ 00:23:46.180 00:23:46.180 real 0m16.057s 00:23:46.180 user 1m5.798s 00:23:46.180 sys 0m5.313s 00:23:46.180 16:00:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:46.180 16:00:48 -- common/autotest_common.sh@10 -- # set +x 00:23:46.180 16:00:48 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:23:46.180 16:00:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:46.180 16:00:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:46.180 16:00:48 -- common/autotest_common.sh@10 -- # set +x 00:23:46.180 ************************************ 00:23:46.180 START TEST nvmf_lvs_grow 00:23:46.180 ************************************ 00:23:46.180 16:00:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:23:46.180 * Looking for test storage... 00:23:46.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:46.180 16:00:48 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:46.180 16:00:49 -- nvmf/common.sh@7 -- # uname -s 00:23:46.180 16:00:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.180 16:00:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.180 16:00:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.180 16:00:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.180 16:00:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.180 16:00:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.180 16:00:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.180 16:00:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.180 16:00:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.180 16:00:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.180 16:00:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:23:46.180 16:00:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:23:46.180 16:00:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.180 16:00:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.180 16:00:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:46.180 16:00:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:46.180 16:00:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.180 16:00:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.180 16:00:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.180 16:00:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.180 16:00:49 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.180 16:00:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.180 16:00:49 -- paths/export.sh@5 -- # export PATH 00:23:46.180 16:00:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.180 16:00:49 -- nvmf/common.sh@46 -- # : 0 00:23:46.180 16:00:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:46.180 16:00:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:46.180 16:00:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:46.180 16:00:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.180 16:00:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.180 16:00:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:46.180 16:00:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:46.180 16:00:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:46.180 16:00:49 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:46.180 16:00:49 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:46.180 16:00:49 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:23:46.180 16:00:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:46.180 16:00:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.180 16:00:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:46.180 16:00:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:46.180 16:00:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:46.180 16:00:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.180 16:00:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:46.180 16:00:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.180 16:00:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:46.180 16:00:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:46.180 16:00:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:46.180 16:00:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:46.180 16:00:49 
-- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:46.180 16:00:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:46.180 16:00:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.180 16:00:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.180 16:00:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:46.180 16:00:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:46.180 16:00:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:46.180 16:00:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:46.180 16:00:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:46.180 16:00:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.180 16:00:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:46.180 16:00:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:46.180 16:00:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:46.180 16:00:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:46.180 16:00:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:46.439 16:00:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:46.439 Cannot find device "nvmf_tgt_br" 00:23:46.439 16:00:49 -- nvmf/common.sh@154 -- # true 00:23:46.439 16:00:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:46.439 Cannot find device "nvmf_tgt_br2" 00:23:46.439 16:00:49 -- nvmf/common.sh@155 -- # true 00:23:46.439 16:00:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:46.439 16:00:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:46.439 Cannot find device "nvmf_tgt_br" 00:23:46.439 16:00:49 -- nvmf/common.sh@157 -- # true 00:23:46.439 16:00:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:46.439 Cannot find device "nvmf_tgt_br2" 00:23:46.439 16:00:49 -- nvmf/common.sh@158 -- # true 00:23:46.439 16:00:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:46.439 16:00:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:46.439 16:00:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:46.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:46.439 16:00:49 -- nvmf/common.sh@161 -- # true 00:23:46.439 16:00:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:46.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:46.439 16:00:49 -- nvmf/common.sh@162 -- # true 00:23:46.439 16:00:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:46.439 16:00:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:46.439 16:00:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:46.439 16:00:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:46.439 16:00:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:46.439 16:00:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:46.439 16:00:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:46.439 16:00:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:46.439 16:00:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.3/24 dev nvmf_tgt_if2 00:23:46.439 16:00:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:46.439 16:00:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:46.439 16:00:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:46.439 16:00:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:46.439 16:00:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:46.439 16:00:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:46.439 16:00:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:46.439 16:00:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:46.439 16:00:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:46.439 16:00:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:46.697 16:00:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:46.697 16:00:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:46.697 16:00:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:46.697 16:00:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:46.697 16:00:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:46.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:23:46.697 00:23:46.697 --- 10.0.0.2 ping statistics --- 00:23:46.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.697 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:23:46.697 16:00:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:46.697 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:46.697 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:23:46.697 00:23:46.697 --- 10.0.0.3 ping statistics --- 00:23:46.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.697 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:23:46.697 16:00:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:46.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:46.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:23:46.697 00:23:46.697 --- 10.0.0.1 ping statistics --- 00:23:46.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.697 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:23:46.697 16:00:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.698 16:00:49 -- nvmf/common.sh@421 -- # return 0 00:23:46.698 16:00:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:46.698 16:00:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.698 16:00:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:46.698 16:00:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:46.698 16:00:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.698 16:00:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:46.698 16:00:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:46.698 16:00:49 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:23:46.698 16:00:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:46.698 16:00:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:46.698 16:00:49 -- common/autotest_common.sh@10 -- # set +x 00:23:46.698 16:00:49 -- nvmf/common.sh@469 -- # nvmfpid=60623 00:23:46.698 16:00:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:46.698 16:00:49 -- nvmf/common.sh@470 -- # waitforlisten 60623 00:23:46.698 16:00:49 -- common/autotest_common.sh@819 -- # '[' -z 60623 ']' 00:23:46.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.698 16:00:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.698 16:00:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:46.698 16:00:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.698 16:00:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:46.698 16:00:49 -- common/autotest_common.sh@10 -- # set +x 00:23:46.698 [2024-07-22 16:00:49.454831] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:46.698 [2024-07-22 16:00:49.454918] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.956 [2024-07-22 16:00:49.591829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.956 [2024-07-22 16:00:49.659509] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:46.956 [2024-07-22 16:00:49.659687] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.956 [2024-07-22 16:00:49.659706] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.956 [2024-07-22 16:00:49.659717] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:46.956 [2024-07-22 16:00:49.659748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.890 16:00:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:47.890 16:00:50 -- common/autotest_common.sh@852 -- # return 0 00:23:47.890 16:00:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:47.890 16:00:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:47.890 16:00:50 -- common/autotest_common.sh@10 -- # set +x 00:23:47.890 16:00:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.890 16:00:50 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:47.890 [2024-07-22 16:00:50.748619] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.148 16:00:50 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:23:48.148 16:00:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:48.148 16:00:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:48.148 16:00:50 -- common/autotest_common.sh@10 -- # set +x 00:23:48.148 ************************************ 00:23:48.148 START TEST lvs_grow_clean 00:23:48.148 ************************************ 00:23:48.148 16:00:50 -- common/autotest_common.sh@1104 -- # lvs_grow 00:23:48.148 16:00:50 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:23:48.148 16:00:50 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:23:48.148 16:00:50 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:23:48.148 16:00:50 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:23:48.148 16:00:50 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:23:48.148 16:00:50 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:23:48.148 16:00:50 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:48.148 16:00:50 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:48.148 16:00:50 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:48.404 16:00:51 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:23:48.404 16:00:51 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:23:48.662 16:00:51 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c1d0fca6-6dbd-447c-8047-f0129dffd14b 00:23:48.662 16:00:51 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1d0fca6-6dbd-447c-8047-f0129dffd14b 00:23:48.662 16:00:51 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:23:48.920 16:00:51 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:23:48.920 16:00:51 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:23:48.920 16:00:51 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c1d0fca6-6dbd-447c-8047-f0129dffd14b lvol 150 00:23:49.178 16:00:51 -- target/nvmf_lvs_grow.sh@33 -- # lvol=768d53e0-13d8-4163-825d-bdbc3a3b63df 00:23:49.178 16:00:51 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:49.179 16:00:51 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:23:49.436 [2024-07-22 16:00:52.138585] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:23:49.436 [2024-07-22 16:00:52.138702] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:23:49.436 true 00:23:49.436 16:00:52 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1d0fca6-6dbd-447c-8047-f0129dffd14b 00:23:49.436 16:00:52 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:23:49.693 16:00:52 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:23:49.693 16:00:52 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:23:49.951 16:00:52 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 768d53e0-13d8-4163-825d-bdbc3a3b63df 00:23:50.214 16:00:52 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:50.473 [2024-07-22 16:00:53.267350] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.473 16:00:53 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:50.729 16:00:53 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=60711 00:23:50.729 16:00:53 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:23:50.729 16:00:53 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:50.729 16:00:53 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 60711 /var/tmp/bdevperf.sock 00:23:50.729 16:00:53 -- common/autotest_common.sh@819 -- # '[' -z 60711 ']' 00:23:50.729 16:00:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.729 16:00:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:50.730 16:00:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.730 16:00:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:50.730 16:00:53 -- common/autotest_common.sh@10 -- # set +x 00:23:50.730 [2024-07-22 16:00:53.583372] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:23:50.730 [2024-07-22 16:00:53.583459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60711 ] 00:23:50.987 [2024-07-22 16:00:53.717034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.987 [2024-07-22 16:00:53.801697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.920 16:00:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:51.920 16:00:54 -- common/autotest_common.sh@852 -- # return 0 00:23:51.920 16:00:54 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:23:52.484 Nvme0n1 00:23:52.484 16:00:55 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:23:52.741 [ 00:23:52.741 { 00:23:52.741 "name": "Nvme0n1", 00:23:52.741 "aliases": [ 00:23:52.741 "768d53e0-13d8-4163-825d-bdbc3a3b63df" 00:23:52.741 ], 00:23:52.741 "product_name": "NVMe disk", 00:23:52.741 "block_size": 4096, 00:23:52.741 "num_blocks": 38912, 00:23:52.741 "uuid": "768d53e0-13d8-4163-825d-bdbc3a3b63df", 00:23:52.741 "assigned_rate_limits": { 00:23:52.741 "rw_ios_per_sec": 0, 00:23:52.741 "rw_mbytes_per_sec": 0, 00:23:52.741 "r_mbytes_per_sec": 0, 00:23:52.741 "w_mbytes_per_sec": 0 00:23:52.741 }, 00:23:52.741 "claimed": false, 00:23:52.741 "zoned": false, 00:23:52.741 "supported_io_types": { 00:23:52.741 "read": true, 00:23:52.741 "write": true, 00:23:52.741 "unmap": true, 00:23:52.741 "write_zeroes": true, 00:23:52.741 "flush": true, 00:23:52.741 "reset": true, 00:23:52.741 "compare": true, 00:23:52.741 "compare_and_write": true, 00:23:52.741 "abort": true, 00:23:52.741 "nvme_admin": true, 00:23:52.741 "nvme_io": true 00:23:52.741 }, 00:23:52.741 "driver_specific": { 00:23:52.741 "nvme": [ 00:23:52.741 { 00:23:52.741 "trid": { 00:23:52.741 "trtype": "TCP", 00:23:52.741 "adrfam": "IPv4", 00:23:52.741 "traddr": "10.0.0.2", 00:23:52.741 "trsvcid": "4420", 00:23:52.741 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:52.741 }, 00:23:52.741 "ctrlr_data": { 00:23:52.741 "cntlid": 1, 00:23:52.741 "vendor_id": "0x8086", 00:23:52.741 "model_number": "SPDK bdev Controller", 00:23:52.741 "serial_number": "SPDK0", 00:23:52.741 "firmware_revision": "24.01.1", 00:23:52.741 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:52.741 "oacs": { 00:23:52.741 "security": 0, 00:23:52.741 "format": 0, 00:23:52.741 "firmware": 0, 00:23:52.741 "ns_manage": 0 00:23:52.741 }, 00:23:52.741 "multi_ctrlr": true, 00:23:52.741 "ana_reporting": false 00:23:52.741 }, 00:23:52.741 "vs": { 00:23:52.741 "nvme_version": "1.3" 00:23:52.741 }, 00:23:52.741 "ns_data": { 00:23:52.741 "id": 1, 00:23:52.741 "can_share": true 00:23:52.741 } 00:23:52.741 } 00:23:52.741 ], 00:23:52.741 "mp_policy": "active_passive" 00:23:52.742 } 00:23:52.742 } 00:23:52.742 ] 00:23:52.742 16:00:55 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=60740 00:23:52.742 16:00:55 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:52.742 16:00:55 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:23:52.999 Running I/O for 10 seconds... 
00:23:53.931 Latency(us) 00:23:53.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:53.931 Nvme0n1 : 1.00 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:23:53.931 =================================================================================================================== 00:23:53.931 Total : 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:23:53.931 00:23:54.863 16:00:57 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c1d0fca6-6dbd-447c-8047-f0129dffd14b 00:23:54.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:54.863 Nvme0n1 : 2.00 6096.00 23.81 0.00 0.00 0.00 0.00 0.00 00:23:54.863 =================================================================================================================== 00:23:54.863 Total : 6096.00 23.81 0.00 0.00 0.00 0.00 0.00 00:23:54.863 00:23:55.120 true 00:23:55.120 16:00:57 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:23:55.120 16:00:57 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1d0fca6-6dbd-447c-8047-f0129dffd14b 00:23:55.377 16:00:58 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:23:55.377 16:00:58 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:23:55.378 16:00:58 -- target/nvmf_lvs_grow.sh@65 -- # wait 60740 00:23:55.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:55.942 Nvme0n1 : 3.00 6307.67 24.64 0.00 0.00 0.00 0.00 0.00 00:23:55.942 =================================================================================================================== 00:23:55.942 Total : 6307.67 24.64 0.00 0.00 0.00 0.00 0.00 00:23:55.942 00:23:56.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:56.877 Nvme0n1 : 4.00 6257.75 24.44 0.00 0.00 0.00 0.00 0.00 00:23:56.877 =================================================================================================================== 00:23:56.877 Total : 6257.75 24.44 0.00 0.00 0.00 0.00 0.00 00:23:56.877 00:23:57.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:57.811 Nvme0n1 : 5.00 6377.80 24.91 0.00 0.00 0.00 0.00 0.00 00:23:57.811 =================================================================================================================== 00:23:57.811 Total : 6377.80 24.91 0.00 0.00 0.00 0.00 0.00 00:23:57.811 00:23:59.198 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:59.198 Nvme0n1 : 6.00 6394.33 24.98 0.00 0.00 0.00 0.00 0.00 00:23:59.198 =================================================================================================================== 00:23:59.198 Total : 6394.33 24.98 0.00 0.00 0.00 0.00 0.00 00:23:59.198 00:24:00.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:00.132 Nvme0n1 : 7.00 6424.29 25.09 0.00 0.00 0.00 0.00 0.00 00:24:00.132 =================================================================================================================== 00:24:00.132 Total : 6424.29 25.09 0.00 0.00 0.00 0.00 0.00 00:24:00.132 00:24:01.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:01.065 Nvme0n1 : 8.00 6430.88 25.12 0.00 0.00 0.00 0.00 0.00 00:24:01.065 
=================================================================================================================== 00:24:01.065 Total : 6430.88 25.12 0.00 0.00 0.00 0.00 0.00 00:24:01.065 00:24:01.999 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:01.999 Nvme0n1 : 9.00 6393.67 24.98 0.00 0.00 0.00 0.00 0.00 00:24:01.999 =================================================================================================================== 00:24:01.999 Total : 6393.67 24.98 0.00 0.00 0.00 0.00 0.00 00:24:01.999 00:24:02.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:02.930 Nvme0n1 : 10.00 6414.70 25.06 0.00 0.00 0.00 0.00 0.00 00:24:02.930 =================================================================================================================== 00:24:02.930 Total : 6414.70 25.06 0.00 0.00 0.00 0.00 0.00 00:24:02.930 00:24:02.930 00:24:02.930 Latency(us) 00:24:02.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:02.930 Nvme0n1 : 10.02 6416.71 25.07 0.00 0.00 19943.75 6494.02 165865.66 00:24:02.930 =================================================================================================================== 00:24:02.930 Total : 6416.71 25.07 0.00 0.00 19943.75 6494.02 165865.66 00:24:02.930 0 00:24:02.930 16:01:05 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 60711 00:24:02.930 16:01:05 -- common/autotest_common.sh@926 -- # '[' -z 60711 ']' 00:24:02.930 16:01:05 -- common/autotest_common.sh@930 -- # kill -0 60711 00:24:02.930 16:01:05 -- common/autotest_common.sh@931 -- # uname 00:24:02.930 16:01:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:02.930 16:01:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60711 00:24:02.930 16:01:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:02.930 16:01:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:02.930 16:01:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60711' 00:24:02.930 killing process with pid 60711 00:24:02.930 Received shutdown signal, test time was about 10.000000 seconds 00:24:02.930 00:24:02.930 Latency(us) 00:24:02.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.930 =================================================================================================================== 00:24:02.930 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:02.930 16:01:05 -- common/autotest_common.sh@945 -- # kill 60711 00:24:02.930 16:01:05 -- common/autotest_common.sh@950 -- # wait 60711 00:24:03.189 16:01:05 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:03.446 16:01:06 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1d0fca6-6dbd-447c-8047-f0129dffd14b 00:24:03.446 16:01:06 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:24:03.702 16:01:06 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:24:03.702 16:01:06 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:24:03.702 16:01:06 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:24:03.960 [2024-07-22 16:01:06.752147] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:24:03.960 
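For reference, the grow path that lvs_grow_clean exercised above reduces to a short RPC sequence: back an lvstore with a file-based AIO bdev, enlarge the backing file, rescan the AIO bdev, and grow the lvstore into the new space. A rough sketch, reconstructed only from the commands visible in this log (the lvstore UUID is whatever bdev_lvol_create_lvstore prints; paths and sizes match the test):

    # 200M backing file -> AIO bdev -> lvstore with 4M clusters (~49 data clusters)
    truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)

    # enlarge the file, let the AIO bdev pick up the new size, then grow the lvstore
    truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    scripts/rpc.py bdev_aio_rescan aio_bdev          # block count 51200 -> 102400
    scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 -> 99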
16:01:06 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1d0fca6-6dbd-447c-8047-f0129dffd14b 00:24:03.960 16:01:06 -- common/autotest_common.sh@640 -- # local es=0 00:24:03.960 16:01:06 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1d0fca6-6dbd-447c-8047-f0129dffd14b 00:24:03.960 16:01:06 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:03.960 16:01:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:03.960 16:01:06 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:03.960 16:01:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:03.960 16:01:06 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:03.960 16:01:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:03.960 16:01:06 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:03.960 16:01:06 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:03.960 16:01:06 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1d0fca6-6dbd-447c-8047-f0129dffd14b 00:24:04.218 request: 00:24:04.218 { 00:24:04.218 "uuid": "c1d0fca6-6dbd-447c-8047-f0129dffd14b", 00:24:04.218 "method": "bdev_lvol_get_lvstores", 00:24:04.218 "req_id": 1 00:24:04.218 } 00:24:04.218 Got JSON-RPC error response 00:24:04.218 response: 00:24:04.218 { 00:24:04.218 "code": -19, 00:24:04.218 "message": "No such device" 00:24:04.218 } 00:24:04.218 16:01:07 -- common/autotest_common.sh@643 -- # es=1 00:24:04.218 16:01:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:04.218 16:01:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:04.218 16:01:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:04.218 16:01:07 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:04.782 aio_bdev 00:24:04.782 16:01:07 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 768d53e0-13d8-4163-825d-bdbc3a3b63df 00:24:04.782 16:01:07 -- common/autotest_common.sh@887 -- # local bdev_name=768d53e0-13d8-4163-825d-bdbc3a3b63df 00:24:04.782 16:01:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:04.782 16:01:07 -- common/autotest_common.sh@889 -- # local i 00:24:04.782 16:01:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:04.782 16:01:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:04.782 16:01:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:05.077 16:01:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 768d53e0-13d8-4163-825d-bdbc3a3b63df -t 2000 00:24:05.344 [ 00:24:05.344 { 00:24:05.344 "name": "768d53e0-13d8-4163-825d-bdbc3a3b63df", 00:24:05.344 "aliases": [ 00:24:05.344 "lvs/lvol" 00:24:05.344 ], 00:24:05.344 "product_name": "Logical Volume", 00:24:05.344 "block_size": 4096, 00:24:05.344 "num_blocks": 38912, 00:24:05.344 "uuid": "768d53e0-13d8-4163-825d-bdbc3a3b63df", 00:24:05.344 "assigned_rate_limits": { 00:24:05.344 "rw_ios_per_sec": 0, 00:24:05.344 "rw_mbytes_per_sec": 0, 00:24:05.344 "r_mbytes_per_sec": 0, 00:24:05.344 
"w_mbytes_per_sec": 0 00:24:05.344 }, 00:24:05.344 "claimed": false, 00:24:05.344 "zoned": false, 00:24:05.344 "supported_io_types": { 00:24:05.344 "read": true, 00:24:05.344 "write": true, 00:24:05.344 "unmap": true, 00:24:05.344 "write_zeroes": true, 00:24:05.344 "flush": false, 00:24:05.344 "reset": true, 00:24:05.344 "compare": false, 00:24:05.344 "compare_and_write": false, 00:24:05.344 "abort": false, 00:24:05.344 "nvme_admin": false, 00:24:05.344 "nvme_io": false 00:24:05.345 }, 00:24:05.345 "driver_specific": { 00:24:05.345 "lvol": { 00:24:05.345 "lvol_store_uuid": "c1d0fca6-6dbd-447c-8047-f0129dffd14b", 00:24:05.345 "base_bdev": "aio_bdev", 00:24:05.345 "thin_provision": false, 00:24:05.345 "snapshot": false, 00:24:05.345 "clone": false, 00:24:05.345 "esnap_clone": false 00:24:05.345 } 00:24:05.345 } 00:24:05.345 } 00:24:05.345 ] 00:24:05.345 16:01:07 -- common/autotest_common.sh@895 -- # return 0 00:24:05.345 16:01:08 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1d0fca6-6dbd-447c-8047-f0129dffd14b 00:24:05.345 16:01:08 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:24:05.603 16:01:08 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:24:05.603 16:01:08 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1d0fca6-6dbd-447c-8047-f0129dffd14b 00:24:05.603 16:01:08 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:24:05.888 16:01:08 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:24:05.888 16:01:08 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 768d53e0-13d8-4163-825d-bdbc3a3b63df 00:24:06.146 16:01:08 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c1d0fca6-6dbd-447c-8047-f0129dffd14b 00:24:06.404 16:01:09 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:24:06.668 16:01:09 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:06.928 00:24:06.928 real 0m18.978s 00:24:06.928 user 0m18.333s 00:24:06.928 sys 0m2.584s 00:24:06.928 16:01:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:06.928 16:01:09 -- common/autotest_common.sh@10 -- # set +x 00:24:06.928 ************************************ 00:24:06.928 END TEST lvs_grow_clean 00:24:06.928 ************************************ 00:24:07.186 16:01:09 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:24:07.186 16:01:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:07.186 16:01:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:07.186 16:01:09 -- common/autotest_common.sh@10 -- # set +x 00:24:07.186 ************************************ 00:24:07.186 START TEST lvs_grow_dirty 00:24:07.186 ************************************ 00:24:07.186 16:01:09 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:24:07.186 16:01:09 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:24:07.186 16:01:09 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:24:07.186 16:01:09 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:24:07.186 16:01:09 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:24:07.186 16:01:09 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:24:07.186 16:01:09 -- target/nvmf_lvs_grow.sh@20 -- # local 
lvol_bdev_size_mb=150 00:24:07.186 16:01:09 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:07.186 16:01:09 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:07.186 16:01:09 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:07.444 16:01:10 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:24:07.444 16:01:10 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:24:07.701 16:01:10 -- target/nvmf_lvs_grow.sh@28 -- # lvs=4f646da7-0df4-4a4c-ab42-8df3abdf5c8e 00:24:07.701 16:01:10 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f646da7-0df4-4a4c-ab42-8df3abdf5c8e 00:24:07.701 16:01:10 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:24:07.959 16:01:10 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:24:07.959 16:01:10 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:24:07.959 16:01:10 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4f646da7-0df4-4a4c-ab42-8df3abdf5c8e lvol 150 00:24:08.217 16:01:10 -- target/nvmf_lvs_grow.sh@33 -- # lvol=eaa878f0-076c-44c1-b7d1-c8b8ee25e3f1 00:24:08.217 16:01:10 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:08.217 16:01:10 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:24:08.510 [2024-07-22 16:01:11.267678] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:24:08.510 [2024-07-22 16:01:11.268471] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:24:08.510 true 00:24:08.510 16:01:11 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f646da7-0df4-4a4c-ab42-8df3abdf5c8e 00:24:08.510 16:01:11 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:24:08.768 16:01:11 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:24:08.768 16:01:11 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:24:09.027 16:01:11 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 eaa878f0-076c-44c1-b7d1-c8b8ee25e3f1 00:24:09.285 16:01:12 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:09.543 16:01:12 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:09.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
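The export path used by both lvs_grow variants is the stock NVMe-oF/TCP sequence already visible in the trace: create the TCP transport once, create a subsystem, attach the lvol as a namespace, and open a listener that bdevperf can connect to. Sketched with the names from this run ($lvol standing in for the lvol UUID, e.g. eaa878f0-076c-44c1-b7d1-c8b8ee25e3f1):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                        # once per target
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: point bdevperf's RPC socket at the same subsystem
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0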
00:24:09.800 16:01:12 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=60989 00:24:09.800 16:01:12 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:24:09.800 16:01:12 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:09.800 16:01:12 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 60989 /var/tmp/bdevperf.sock 00:24:09.800 16:01:12 -- common/autotest_common.sh@819 -- # '[' -z 60989 ']' 00:24:09.800 16:01:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:09.800 16:01:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:09.800 16:01:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:09.800 16:01:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:09.800 16:01:12 -- common/autotest_common.sh@10 -- # set +x 00:24:09.800 [2024-07-22 16:01:12.627579] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:09.800 [2024-07-22 16:01:12.627670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60989 ] 00:24:10.059 [2024-07-22 16:01:12.763424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.059 [2024-07-22 16:01:12.849727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.316 16:01:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:10.316 16:01:12 -- common/autotest_common.sh@852 -- # return 0 00:24:10.316 16:01:12 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:24:10.574 Nvme0n1 00:24:10.574 16:01:13 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:24:10.831 [ 00:24:10.831 { 00:24:10.831 "name": "Nvme0n1", 00:24:10.831 "aliases": [ 00:24:10.831 "eaa878f0-076c-44c1-b7d1-c8b8ee25e3f1" 00:24:10.831 ], 00:24:10.831 "product_name": "NVMe disk", 00:24:10.831 "block_size": 4096, 00:24:10.831 "num_blocks": 38912, 00:24:10.831 "uuid": "eaa878f0-076c-44c1-b7d1-c8b8ee25e3f1", 00:24:10.831 "assigned_rate_limits": { 00:24:10.831 "rw_ios_per_sec": 0, 00:24:10.831 "rw_mbytes_per_sec": 0, 00:24:10.831 "r_mbytes_per_sec": 0, 00:24:10.831 "w_mbytes_per_sec": 0 00:24:10.831 }, 00:24:10.831 "claimed": false, 00:24:10.831 "zoned": false, 00:24:10.831 "supported_io_types": { 00:24:10.831 "read": true, 00:24:10.831 "write": true, 00:24:10.831 "unmap": true, 00:24:10.831 "write_zeroes": true, 00:24:10.831 "flush": true, 00:24:10.831 "reset": true, 00:24:10.831 "compare": true, 00:24:10.832 "compare_and_write": true, 00:24:10.832 "abort": true, 00:24:10.832 "nvme_admin": true, 00:24:10.832 "nvme_io": true 00:24:10.832 }, 00:24:10.832 "driver_specific": { 00:24:10.832 "nvme": [ 00:24:10.832 { 00:24:10.832 "trid": { 00:24:10.832 "trtype": "TCP", 00:24:10.832 "adrfam": "IPv4", 00:24:10.832 "traddr": "10.0.0.2", 00:24:10.832 "trsvcid": "4420", 00:24:10.832 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:10.832 }, 00:24:10.832 "ctrlr_data": { 00:24:10.832 "cntlid": 1, 00:24:10.832 
"vendor_id": "0x8086", 00:24:10.832 "model_number": "SPDK bdev Controller", 00:24:10.832 "serial_number": "SPDK0", 00:24:10.832 "firmware_revision": "24.01.1", 00:24:10.832 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:10.832 "oacs": { 00:24:10.832 "security": 0, 00:24:10.832 "format": 0, 00:24:10.832 "firmware": 0, 00:24:10.832 "ns_manage": 0 00:24:10.832 }, 00:24:10.832 "multi_ctrlr": true, 00:24:10.832 "ana_reporting": false 00:24:10.832 }, 00:24:10.832 "vs": { 00:24:10.832 "nvme_version": "1.3" 00:24:10.832 }, 00:24:10.832 "ns_data": { 00:24:10.832 "id": 1, 00:24:10.832 "can_share": true 00:24:10.832 } 00:24:10.832 } 00:24:10.832 ], 00:24:10.832 "mp_policy": "active_passive" 00:24:10.832 } 00:24:10.832 } 00:24:10.832 ] 00:24:10.832 16:01:13 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=61005 00:24:10.832 16:01:13 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:10.832 16:01:13 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:24:11.090 Running I/O for 10 seconds... 00:24:12.024 Latency(us) 00:24:12.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:12.024 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:24:12.024 =================================================================================================================== 00:24:12.024 Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:24:12.024 00:24:12.959 16:01:15 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4f646da7-0df4-4a4c-ab42-8df3abdf5c8e 00:24:13.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:13.217 Nvme0n1 : 2.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:24:13.217 =================================================================================================================== 00:24:13.217 Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:24:13.217 00:24:13.217 true 00:24:13.217 16:01:15 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f646da7-0df4-4a4c-ab42-8df3abdf5c8e 00:24:13.217 16:01:15 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:24:13.783 16:01:16 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:24:13.783 16:01:16 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:24:13.783 16:01:16 -- target/nvmf_lvs_grow.sh@65 -- # wait 61005 00:24:14.042 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:14.042 Nvme0n1 : 3.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:24:14.042 =================================================================================================================== 00:24:14.042 Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:24:14.042 00:24:15.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:15.445 Nvme0n1 : 4.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:24:15.445 =================================================================================================================== 00:24:15.445 Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:24:15.445 00:24:16.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:16.018 Nvme0n1 : 5.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:24:16.018 
=================================================================================================================== 00:24:16.018 Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:24:16.018 00:24:17.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:17.392 Nvme0n1 : 6.00 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:24:17.392 =================================================================================================================== 00:24:17.392 Total : 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:24:17.392 00:24:18.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:18.326 Nvme0n1 : 7.00 6201.43 24.22 0.00 0.00 0.00 0.00 0.00 00:24:18.326 =================================================================================================================== 00:24:18.326 Total : 6201.43 24.22 0.00 0.00 0.00 0.00 0.00 00:24:18.326 00:24:19.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:19.261 Nvme0n1 : 8.00 6251.75 24.42 0.00 0.00 0.00 0.00 0.00 00:24:19.261 =================================================================================================================== 00:24:19.261 Total : 6251.75 24.42 0.00 0.00 0.00 0.00 0.00 00:24:19.261 00:24:20.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:20.195 Nvme0n1 : 9.00 6248.56 24.41 0.00 0.00 0.00 0.00 0.00 00:24:20.195 =================================================================================================================== 00:24:20.195 Total : 6248.56 24.41 0.00 0.00 0.00 0.00 0.00 00:24:20.195 00:24:21.128 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:21.129 Nvme0n1 : 10.00 6284.10 24.55 0.00 0.00 0.00 0.00 0.00 00:24:21.129 =================================================================================================================== 00:24:21.129 Total : 6284.10 24.55 0.00 0.00 0.00 0.00 0.00 00:24:21.129 00:24:21.129 00:24:21.129 Latency(us) 00:24:21.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:21.129 Nvme0n1 : 10.02 6286.41 24.56 0.00 0.00 20354.46 8698.41 222107.46 00:24:21.129 =================================================================================================================== 00:24:21.129 Total : 6286.41 24.56 0.00 0.00 20354.46 8698.41 222107.46 00:24:21.129 0 00:24:21.129 16:01:23 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 60989 00:24:21.129 16:01:23 -- common/autotest_common.sh@926 -- # '[' -z 60989 ']' 00:24:21.129 16:01:23 -- common/autotest_common.sh@930 -- # kill -0 60989 00:24:21.129 16:01:23 -- common/autotest_common.sh@931 -- # uname 00:24:21.129 16:01:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:21.129 16:01:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60989 00:24:21.129 killing process with pid 60989 00:24:21.129 Received shutdown signal, test time was about 10.000000 seconds 00:24:21.129 00:24:21.129 Latency(us) 00:24:21.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.129 =================================================================================================================== 00:24:21.129 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.129 16:01:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:21.129 16:01:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
00:24:21.129 16:01:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60989' 00:24:21.129 16:01:23 -- common/autotest_common.sh@945 -- # kill 60989 00:24:21.129 16:01:23 -- common/autotest_common.sh@950 -- # wait 60989 00:24:21.386 16:01:24 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:21.644 16:01:24 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f646da7-0df4-4a4c-ab42-8df3abdf5c8e 00:24:21.644 16:01:24 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:24:21.901 16:01:24 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:24:21.901 16:01:24 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:24:21.901 16:01:24 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 60623 00:24:21.901 16:01:24 -- target/nvmf_lvs_grow.sh@74 -- # wait 60623 00:24:21.901 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 60623 Killed "${NVMF_APP[@]}" "$@" 00:24:21.901 16:01:24 -- target/nvmf_lvs_grow.sh@74 -- # true 00:24:21.901 16:01:24 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:24:21.901 16:01:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:21.901 16:01:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:21.901 16:01:24 -- common/autotest_common.sh@10 -- # set +x 00:24:21.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.901 16:01:24 -- nvmf/common.sh@469 -- # nvmfpid=61133 00:24:21.901 16:01:24 -- nvmf/common.sh@470 -- # waitforlisten 61133 00:24:21.901 16:01:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:21.901 16:01:24 -- common/autotest_common.sh@819 -- # '[' -z 61133 ']' 00:24:21.901 16:01:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.901 16:01:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:21.901 16:01:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.901 16:01:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:21.901 16:01:24 -- common/autotest_common.sh@10 -- # set +x 00:24:21.901 [2024-07-22 16:01:24.716609] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:21.901 [2024-07-22 16:01:24.716744] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.159 [2024-07-22 16:01:24.856877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.159 [2024-07-22 16:01:24.915523] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:22.159 [2024-07-22 16:01:24.915675] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.159 [2024-07-22 16:01:24.915690] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.159 [2024-07-22 16:01:24.915699] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:22.159 [2024-07-22 16:01:24.915731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.093 16:01:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:23.093 16:01:25 -- common/autotest_common.sh@852 -- # return 0 00:24:23.093 16:01:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:23.093 16:01:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:23.093 16:01:25 -- common/autotest_common.sh@10 -- # set +x 00:24:23.093 16:01:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.093 16:01:25 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:23.351 [2024-07-22 16:01:25.981072] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:23.351 [2024-07-22 16:01:25.981357] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:23.351 [2024-07-22 16:01:25.981530] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:23.351 16:01:26 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:24:23.351 16:01:26 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev eaa878f0-076c-44c1-b7d1-c8b8ee25e3f1 00:24:23.351 16:01:26 -- common/autotest_common.sh@887 -- # local bdev_name=eaa878f0-076c-44c1-b7d1-c8b8ee25e3f1 00:24:23.351 16:01:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:23.351 16:01:26 -- common/autotest_common.sh@889 -- # local i 00:24:23.351 16:01:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:23.351 16:01:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:23.351 16:01:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:23.609 16:01:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eaa878f0-076c-44c1-b7d1-c8b8ee25e3f1 -t 2000 00:24:23.867 [ 00:24:23.867 { 00:24:23.867 "name": "eaa878f0-076c-44c1-b7d1-c8b8ee25e3f1", 00:24:23.867 "aliases": [ 00:24:23.867 "lvs/lvol" 00:24:23.867 ], 00:24:23.867 "product_name": "Logical Volume", 00:24:23.867 "block_size": 4096, 00:24:23.867 "num_blocks": 38912, 00:24:23.867 "uuid": "eaa878f0-076c-44c1-b7d1-c8b8ee25e3f1", 00:24:23.867 "assigned_rate_limits": { 00:24:23.867 "rw_ios_per_sec": 0, 00:24:23.867 "rw_mbytes_per_sec": 0, 00:24:23.867 "r_mbytes_per_sec": 0, 00:24:23.867 "w_mbytes_per_sec": 0 00:24:23.867 }, 00:24:23.867 "claimed": false, 00:24:23.867 "zoned": false, 00:24:23.867 "supported_io_types": { 00:24:23.867 "read": true, 00:24:23.867 "write": true, 00:24:23.867 "unmap": true, 00:24:23.867 "write_zeroes": true, 00:24:23.867 "flush": false, 00:24:23.867 "reset": true, 00:24:23.867 "compare": false, 00:24:23.867 "compare_and_write": false, 00:24:23.867 "abort": false, 00:24:23.867 "nvme_admin": false, 00:24:23.867 "nvme_io": false 00:24:23.867 }, 00:24:23.867 "driver_specific": { 00:24:23.867 "lvol": { 00:24:23.867 "lvol_store_uuid": "4f646da7-0df4-4a4c-ab42-8df3abdf5c8e", 00:24:23.867 "base_bdev": "aio_bdev", 00:24:23.867 "thin_provision": false, 00:24:23.867 "snapshot": false, 00:24:23.867 "clone": false, 00:24:23.867 "esnap_clone": false 00:24:23.867 } 00:24:23.867 } 00:24:23.867 } 00:24:23.867 ] 00:24:23.867 16:01:26 -- common/autotest_common.sh@895 -- # return 0 00:24:23.867 16:01:26 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
4f646da7-0df4-4a4c-ab42-8df3abdf5c8e 00:24:23.867 16:01:26 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:24:24.434 16:01:27 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:24:24.434 16:01:27 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f646da7-0df4-4a4c-ab42-8df3abdf5c8e 00:24:24.434 16:01:27 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:24:24.692 16:01:27 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:24:24.692 16:01:27 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:24:24.950 [2024-07-22 16:01:27.655164] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:24:24.950 16:01:27 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f646da7-0df4-4a4c-ab42-8df3abdf5c8e 00:24:24.950 16:01:27 -- common/autotest_common.sh@640 -- # local es=0 00:24:24.950 16:01:27 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f646da7-0df4-4a4c-ab42-8df3abdf5c8e 00:24:24.950 16:01:27 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:24.950 16:01:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:24.950 16:01:27 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:24.950 16:01:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:24.950 16:01:27 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:24.950 16:01:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:24.950 16:01:27 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:24.950 16:01:27 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:24.950 16:01:27 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f646da7-0df4-4a4c-ab42-8df3abdf5c8e 00:24:25.209 request: 00:24:25.209 { 00:24:25.209 "uuid": "4f646da7-0df4-4a4c-ab42-8df3abdf5c8e", 00:24:25.209 "method": "bdev_lvol_get_lvstores", 00:24:25.209 "req_id": 1 00:24:25.209 } 00:24:25.209 Got JSON-RPC error response 00:24:25.209 response: 00:24:25.209 { 00:24:25.209 "code": -19, 00:24:25.209 "message": "No such device" 00:24:25.209 } 00:24:25.209 16:01:27 -- common/autotest_common.sh@643 -- # es=1 00:24:25.209 16:01:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:25.209 16:01:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:25.209 16:01:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:25.209 16:01:27 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:25.468 aio_bdev 00:24:25.468 16:01:28 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev eaa878f0-076c-44c1-b7d1-c8b8ee25e3f1 00:24:25.468 16:01:28 -- common/autotest_common.sh@887 -- # local bdev_name=eaa878f0-076c-44c1-b7d1-c8b8ee25e3f1 00:24:25.468 16:01:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:25.468 16:01:28 -- common/autotest_common.sh@889 -- # local i 00:24:25.468 16:01:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:25.468 16:01:28 -- 
common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:25.468 16:01:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:25.726 16:01:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eaa878f0-076c-44c1-b7d1-c8b8ee25e3f1 -t 2000 00:24:25.985 [ 00:24:25.985 { 00:24:25.985 "name": "eaa878f0-076c-44c1-b7d1-c8b8ee25e3f1", 00:24:25.985 "aliases": [ 00:24:25.985 "lvs/lvol" 00:24:25.985 ], 00:24:25.985 "product_name": "Logical Volume", 00:24:25.985 "block_size": 4096, 00:24:25.985 "num_blocks": 38912, 00:24:25.985 "uuid": "eaa878f0-076c-44c1-b7d1-c8b8ee25e3f1", 00:24:25.985 "assigned_rate_limits": { 00:24:25.985 "rw_ios_per_sec": 0, 00:24:25.985 "rw_mbytes_per_sec": 0, 00:24:25.985 "r_mbytes_per_sec": 0, 00:24:25.985 "w_mbytes_per_sec": 0 00:24:25.985 }, 00:24:25.985 "claimed": false, 00:24:25.985 "zoned": false, 00:24:25.985 "supported_io_types": { 00:24:25.985 "read": true, 00:24:25.985 "write": true, 00:24:25.985 "unmap": true, 00:24:25.985 "write_zeroes": true, 00:24:25.985 "flush": false, 00:24:25.985 "reset": true, 00:24:25.985 "compare": false, 00:24:25.985 "compare_and_write": false, 00:24:25.985 "abort": false, 00:24:25.985 "nvme_admin": false, 00:24:25.985 "nvme_io": false 00:24:25.985 }, 00:24:25.985 "driver_specific": { 00:24:25.985 "lvol": { 00:24:25.985 "lvol_store_uuid": "4f646da7-0df4-4a4c-ab42-8df3abdf5c8e", 00:24:25.985 "base_bdev": "aio_bdev", 00:24:25.985 "thin_provision": false, 00:24:25.985 "snapshot": false, 00:24:25.985 "clone": false, 00:24:25.985 "esnap_clone": false 00:24:25.985 } 00:24:25.985 } 00:24:25.985 } 00:24:25.985 ] 00:24:25.985 16:01:28 -- common/autotest_common.sh@895 -- # return 0 00:24:25.985 16:01:28 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f646da7-0df4-4a4c-ab42-8df3abdf5c8e 00:24:25.985 16:01:28 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:24:26.253 16:01:28 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:24:26.253 16:01:28 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f646da7-0df4-4a4c-ab42-8df3abdf5c8e 00:24:26.253 16:01:28 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:24:26.511 16:01:29 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:24:26.511 16:01:29 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete eaa878f0-076c-44c1-b7d1-c8b8ee25e3f1 00:24:26.770 16:01:29 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4f646da7-0df4-4a4c-ab42-8df3abdf5c8e 00:24:27.029 16:01:29 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:24:27.287 16:01:29 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:27.544 ************************************ 00:24:27.545 END TEST lvs_grow_dirty 00:24:27.545 ************************************ 00:24:27.545 00:24:27.545 real 0m20.456s 00:24:27.545 user 0m43.492s 00:24:27.545 sys 0m7.935s 00:24:27.545 16:01:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:27.545 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:24:27.545 16:01:30 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:24:27.545 16:01:30 -- common/autotest_common.sh@796 -- # type=--id 00:24:27.545 16:01:30 -- 
common/autotest_common.sh@797 -- # id=0 00:24:27.545 16:01:30 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:24:27.545 16:01:30 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:27.545 16:01:30 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:24:27.545 16:01:30 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:24:27.545 16:01:30 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:24:27.545 16:01:30 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:27.545 nvmf_trace.0 00:24:27.545 16:01:30 -- common/autotest_common.sh@811 -- # return 0 00:24:27.545 16:01:30 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:24:27.545 16:01:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:27.545 16:01:30 -- nvmf/common.sh@116 -- # sync 00:24:27.803 16:01:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:27.803 16:01:30 -- nvmf/common.sh@119 -- # set +e 00:24:27.803 16:01:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:27.803 16:01:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:27.803 rmmod nvme_tcp 00:24:27.803 rmmod nvme_fabrics 00:24:27.803 rmmod nvme_keyring 00:24:27.803 16:01:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:27.803 16:01:30 -- nvmf/common.sh@123 -- # set -e 00:24:27.803 16:01:30 -- nvmf/common.sh@124 -- # return 0 00:24:27.803 16:01:30 -- nvmf/common.sh@477 -- # '[' -n 61133 ']' 00:24:27.803 16:01:30 -- nvmf/common.sh@478 -- # killprocess 61133 00:24:27.803 16:01:30 -- common/autotest_common.sh@926 -- # '[' -z 61133 ']' 00:24:27.803 16:01:30 -- common/autotest_common.sh@930 -- # kill -0 61133 00:24:27.803 16:01:30 -- common/autotest_common.sh@931 -- # uname 00:24:27.803 16:01:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:27.803 16:01:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61133 00:24:27.803 killing process with pid 61133 00:24:27.803 16:01:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:27.803 16:01:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:27.803 16:01:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61133' 00:24:27.803 16:01:30 -- common/autotest_common.sh@945 -- # kill 61133 00:24:27.803 16:01:30 -- common/autotest_common.sh@950 -- # wait 61133 00:24:28.060 16:01:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:28.060 16:01:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:28.060 16:01:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:28.060 16:01:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:28.060 16:01:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:28.060 16:01:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.060 16:01:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.060 16:01:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.060 16:01:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:28.060 ************************************ 00:24:28.060 END TEST nvmf_lvs_grow 00:24:28.060 ************************************ 00:24:28.060 00:24:28.060 real 0m41.864s 00:24:28.060 user 1m8.587s 00:24:28.060 sys 0m11.148s 00:24:28.060 16:01:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:28.060 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:24:28.060 16:01:30 -- nvmf/nvmf.sh@49 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:24:28.060 16:01:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:28.060 16:01:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:28.060 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:24:28.060 ************************************ 00:24:28.060 START TEST nvmf_bdev_io_wait 00:24:28.060 ************************************ 00:24:28.060 16:01:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:24:28.060 * Looking for test storage... 00:24:28.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:28.060 16:01:30 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:28.060 16:01:30 -- nvmf/common.sh@7 -- # uname -s 00:24:28.060 16:01:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.060 16:01:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.060 16:01:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.060 16:01:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.060 16:01:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.060 16:01:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.061 16:01:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.061 16:01:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.061 16:01:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.061 16:01:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.319 16:01:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:24:28.319 16:01:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:24:28.319 16:01:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.319 16:01:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.319 16:01:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:28.319 16:01:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:28.319 16:01:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.319 16:01:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.319 16:01:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.319 16:01:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.319 16:01:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.319 16:01:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.319 16:01:30 -- paths/export.sh@5 -- # export PATH 00:24:28.319 16:01:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.319 16:01:30 -- nvmf/common.sh@46 -- # : 0 00:24:28.319 16:01:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:28.319 16:01:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:28.319 16:01:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:28.319 16:01:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.319 16:01:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.319 16:01:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:28.319 16:01:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:28.319 16:01:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:28.319 16:01:30 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:28.319 16:01:30 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:28.319 16:01:30 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:24:28.319 16:01:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:28.319 16:01:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.319 16:01:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:28.319 16:01:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:28.319 16:01:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:28.319 16:01:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.319 16:01:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.319 16:01:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.319 16:01:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:28.319 16:01:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:28.319 16:01:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:28.319 16:01:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:28.319 16:01:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 
00:24:28.319 16:01:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:28.319 16:01:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.319 16:01:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.319 16:01:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:28.319 16:01:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:28.319 16:01:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:28.319 16:01:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:28.319 16:01:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:28.319 16:01:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.319 16:01:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:28.319 16:01:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:28.319 16:01:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:28.319 16:01:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:28.319 16:01:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:28.320 16:01:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:28.320 Cannot find device "nvmf_tgt_br" 00:24:28.320 16:01:30 -- nvmf/common.sh@154 -- # true 00:24:28.320 16:01:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:28.320 Cannot find device "nvmf_tgt_br2" 00:24:28.320 16:01:30 -- nvmf/common.sh@155 -- # true 00:24:28.320 16:01:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:28.320 16:01:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:28.320 Cannot find device "nvmf_tgt_br" 00:24:28.320 16:01:31 -- nvmf/common.sh@157 -- # true 00:24:28.320 16:01:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:28.320 Cannot find device "nvmf_tgt_br2" 00:24:28.320 16:01:31 -- nvmf/common.sh@158 -- # true 00:24:28.320 16:01:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:28.320 16:01:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:28.320 16:01:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:28.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:28.320 16:01:31 -- nvmf/common.sh@161 -- # true 00:24:28.320 16:01:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:28.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:28.320 16:01:31 -- nvmf/common.sh@162 -- # true 00:24:28.320 16:01:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:28.320 16:01:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:28.320 16:01:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:28.320 16:01:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:28.320 16:01:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:28.320 16:01:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:28.320 16:01:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:28.320 16:01:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:28.320 16:01:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:28.320 
16:01:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:28.320 16:01:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:28.320 16:01:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:28.320 16:01:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:28.320 16:01:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:28.579 16:01:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:28.579 16:01:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:28.579 16:01:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:28.579 16:01:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:28.579 16:01:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:28.579 16:01:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:28.579 16:01:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:28.579 16:01:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:28.579 16:01:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:28.579 16:01:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:28.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:24:28.579 00:24:28.579 --- 10.0.0.2 ping statistics --- 00:24:28.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.579 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:24:28.579 16:01:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:28.579 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:28.579 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:24:28.579 00:24:28.579 --- 10.0.0.3 ping statistics --- 00:24:28.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.579 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:24:28.579 16:01:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:28.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:28.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:24:28.579 00:24:28.579 --- 10.0.0.1 ping statistics --- 00:24:28.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.579 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:24:28.579 16:01:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.579 16:01:31 -- nvmf/common.sh@421 -- # return 0 00:24:28.579 16:01:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:28.579 16:01:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.579 16:01:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:28.579 16:01:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:28.579 16:01:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.579 16:01:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:28.579 16:01:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:28.579 16:01:31 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:28.579 16:01:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:28.579 16:01:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:28.579 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:28.579 16:01:31 -- nvmf/common.sh@469 -- # nvmfpid=61451 00:24:28.579 16:01:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:28.579 16:01:31 -- nvmf/common.sh@470 -- # waitforlisten 61451 00:24:28.579 16:01:31 -- common/autotest_common.sh@819 -- # '[' -z 61451 ']' 00:24:28.579 16:01:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.579 16:01:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:28.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.579 16:01:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.579 16:01:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:28.579 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:28.579 [2024-07-22 16:01:31.354783] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:28.579 [2024-07-22 16:01:31.354869] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.838 [2024-07-22 16:01:31.493570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:28.838 [2024-07-22 16:01:31.562143] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:28.838 [2024-07-22 16:01:31.562521] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.838 [2024-07-22 16:01:31.562659] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.838 [2024-07-22 16:01:31.562816] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
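Condensed, the nvmf_veth_init and nvmfappstart steps traced above build a small bridged veth topology and start the target inside the nvmf_tgt_ns_spdk namespace. A rough standalone sketch of the same setup (interface names, addresses and paths taken from this trace; the second target interface and the 10.0.0.3 address are left out for brevity) is:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br; ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP reach the listener
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0xF --wait-for-rpc &

After this, 10.0.0.1 (initiator side) and 10.0.0.2 (target side) can reach each other across the bridge, which is exactly what the three pings above verify before the test proceeds.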
00:24:28.838 [2024-07-22 16:01:31.566523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.838 [2024-07-22 16:01:31.566949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.838 [2024-07-22 16:01:31.566861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.838 [2024-07-22 16:01:31.566929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.838 16:01:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:28.838 16:01:31 -- common/autotest_common.sh@852 -- # return 0 00:24:28.838 16:01:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:28.838 16:01:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:28.838 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:28.838 16:01:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.838 16:01:31 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:24:28.838 16:01:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.838 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:28.838 16:01:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.838 16:01:31 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:24:28.838 16:01:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.838 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:28.838 16:01:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.838 16:01:31 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:28.838 16:01:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.838 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:29.098 [2024-07-22 16:01:31.704570] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.098 16:01:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:29.098 16:01:31 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:29.098 16:01:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:29.098 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:29.098 Malloc0 00:24:29.098 16:01:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:29.098 16:01:31 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:29.098 16:01:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:29.098 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:29.098 16:01:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:29.098 16:01:31 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:29.098 16:01:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:29.098 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:29.098 16:01:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:29.098 16:01:31 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:29.098 16:01:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:29.098 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:24:29.098 [2024-07-22 16:01:31.757528] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.098 16:01:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:29.098 16:01:31 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=61479 00:24:29.098 16:01:31 
-- target/bdev_io_wait.sh@30 -- # READ_PID=61481 00:24:29.098 16:01:31 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:24:29.098 16:01:31 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:24:29.098 16:01:31 -- nvmf/common.sh@520 -- # config=() 00:24:29.098 16:01:31 -- nvmf/common.sh@520 -- # local subsystem config 00:24:29.098 16:01:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:29.098 16:01:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:29.098 { 00:24:29.098 "params": { 00:24:29.098 "name": "Nvme$subsystem", 00:24:29.098 "trtype": "$TEST_TRANSPORT", 00:24:29.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.098 "adrfam": "ipv4", 00:24:29.098 "trsvcid": "$NVMF_PORT", 00:24:29.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.098 "hdgst": ${hdgst:-false}, 00:24:29.098 "ddgst": ${ddgst:-false} 00:24:29.098 }, 00:24:29.098 "method": "bdev_nvme_attach_controller" 00:24:29.098 } 00:24:29.098 EOF 00:24:29.098 )") 00:24:29.098 16:01:31 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=61483 00:24:29.098 16:01:31 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:24:29.098 16:01:31 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:24:29.098 16:01:31 -- nvmf/common.sh@520 -- # config=() 00:24:29.098 16:01:31 -- nvmf/common.sh@520 -- # local subsystem config 00:24:29.098 16:01:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:29.098 16:01:31 -- nvmf/common.sh@542 -- # cat 00:24:29.098 16:01:31 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:24:29.098 16:01:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:29.098 { 00:24:29.098 "params": { 00:24:29.098 "name": "Nvme$subsystem", 00:24:29.098 "trtype": "$TEST_TRANSPORT", 00:24:29.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.098 "adrfam": "ipv4", 00:24:29.098 "trsvcid": "$NVMF_PORT", 00:24:29.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.098 "hdgst": ${hdgst:-false}, 00:24:29.098 "ddgst": ${ddgst:-false} 00:24:29.098 }, 00:24:29.098 "method": "bdev_nvme_attach_controller" 00:24:29.098 } 00:24:29.098 EOF 00:24:29.098 )") 00:24:29.098 16:01:31 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:24:29.098 16:01:31 -- nvmf/common.sh@542 -- # cat 00:24:29.098 16:01:31 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:24:29.098 16:01:31 -- nvmf/common.sh@520 -- # config=() 00:24:29.098 16:01:31 -- nvmf/common.sh@520 -- # local subsystem config 00:24:29.098 16:01:31 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=61486 00:24:29.098 16:01:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:29.098 16:01:31 -- target/bdev_io_wait.sh@35 -- # sync 00:24:29.098 16:01:31 -- nvmf/common.sh@544 -- # jq . 
00:24:29.098 16:01:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:29.098 { 00:24:29.098 "params": { 00:24:29.098 "name": "Nvme$subsystem", 00:24:29.098 "trtype": "$TEST_TRANSPORT", 00:24:29.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.098 "adrfam": "ipv4", 00:24:29.098 "trsvcid": "$NVMF_PORT", 00:24:29.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.098 "hdgst": ${hdgst:-false}, 00:24:29.098 "ddgst": ${ddgst:-false} 00:24:29.098 }, 00:24:29.098 "method": "bdev_nvme_attach_controller" 00:24:29.098 } 00:24:29.098 EOF 00:24:29.098 )") 00:24:29.098 16:01:31 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:24:29.098 16:01:31 -- nvmf/common.sh@520 -- # config=() 00:24:29.098 16:01:31 -- nvmf/common.sh@520 -- # local subsystem config 00:24:29.098 16:01:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:29.098 16:01:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:29.098 { 00:24:29.098 "params": { 00:24:29.098 "name": "Nvme$subsystem", 00:24:29.098 "trtype": "$TEST_TRANSPORT", 00:24:29.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.098 "adrfam": "ipv4", 00:24:29.098 "trsvcid": "$NVMF_PORT", 00:24:29.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.098 "hdgst": ${hdgst:-false}, 00:24:29.098 "ddgst": ${ddgst:-false} 00:24:29.098 }, 00:24:29.098 "method": "bdev_nvme_attach_controller" 00:24:29.098 } 00:24:29.098 EOF 00:24:29.098 )") 00:24:29.098 16:01:31 -- nvmf/common.sh@542 -- # cat 00:24:29.098 16:01:31 -- nvmf/common.sh@545 -- # IFS=, 00:24:29.098 16:01:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:29.098 "params": { 00:24:29.098 "name": "Nvme1", 00:24:29.098 "trtype": "tcp", 00:24:29.098 "traddr": "10.0.0.2", 00:24:29.098 "adrfam": "ipv4", 00:24:29.098 "trsvcid": "4420", 00:24:29.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:29.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:29.098 "hdgst": false, 00:24:29.098 "ddgst": false 00:24:29.098 }, 00:24:29.098 "method": "bdev_nvme_attach_controller" 00:24:29.098 }' 00:24:29.098 16:01:31 -- nvmf/common.sh@544 -- # jq . 00:24:29.098 16:01:31 -- nvmf/common.sh@542 -- # cat 00:24:29.098 16:01:31 -- nvmf/common.sh@545 -- # IFS=, 00:24:29.098 16:01:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:29.098 "params": { 00:24:29.098 "name": "Nvme1", 00:24:29.098 "trtype": "tcp", 00:24:29.098 "traddr": "10.0.0.2", 00:24:29.098 "adrfam": "ipv4", 00:24:29.098 "trsvcid": "4420", 00:24:29.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:29.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:29.098 "hdgst": false, 00:24:29.098 "ddgst": false 00:24:29.098 }, 00:24:29.098 "method": "bdev_nvme_attach_controller" 00:24:29.098 }' 00:24:29.098 16:01:31 -- nvmf/common.sh@544 -- # jq . 00:24:29.098 16:01:31 -- nvmf/common.sh@545 -- # IFS=, 00:24:29.098 16:01:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:29.098 "params": { 00:24:29.098 "name": "Nvme1", 00:24:29.098 "trtype": "tcp", 00:24:29.098 "traddr": "10.0.0.2", 00:24:29.098 "adrfam": "ipv4", 00:24:29.098 "trsvcid": "4420", 00:24:29.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:29.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:29.098 "hdgst": false, 00:24:29.098 "ddgst": false 00:24:29.098 }, 00:24:29.098 "method": "bdev_nvme_attach_controller" 00:24:29.098 }' 00:24:29.098 16:01:31 -- nvmf/common.sh@544 -- # jq . 
00:24:29.098 16:01:31 -- nvmf/common.sh@545 -- # IFS=, 00:24:29.098 16:01:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:29.098 "params": { 00:24:29.098 "name": "Nvme1", 00:24:29.098 "trtype": "tcp", 00:24:29.098 "traddr": "10.0.0.2", 00:24:29.098 "adrfam": "ipv4", 00:24:29.098 "trsvcid": "4420", 00:24:29.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:29.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:29.098 "hdgst": false, 00:24:29.098 "ddgst": false 00:24:29.098 }, 00:24:29.098 "method": "bdev_nvme_attach_controller" 00:24:29.098 }' 00:24:29.098 16:01:31 -- target/bdev_io_wait.sh@37 -- # wait 61479 00:24:29.098 [2024-07-22 16:01:31.826422] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:29.098 [2024-07-22 16:01:31.826710] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:29.098 [2024-07-22 16:01:31.834372] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:29.098 [2024-07-22 16:01:31.835314] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:24:29.098 [2024-07-22 16:01:31.852446] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:29.099 [2024-07-22 16:01:31.852827] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:24:29.099 [2024-07-22 16:01:31.853905] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:29.099 [2024-07-22 16:01:31.854178] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:24:29.357 [2024-07-22 16:01:31.992387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.357 [2024-07-22 16:01:32.036444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:29.357 [2024-07-22 16:01:32.044937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.357 [2024-07-22 16:01:32.095739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:24:29.357 [2024-07-22 16:01:32.110648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.357 Running I/O for 1 seconds... 00:24:29.357 [2024-07-22 16:01:32.155672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.357 [2024-07-22 16:01:32.176164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:24:29.357 Running I/O for 1 seconds... 00:24:29.614 [2024-07-22 16:01:32.223333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:24:29.614 Running I/O for 1 seconds... 00:24:29.614 Running I/O for 1 seconds... 
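Each of the four bdevperf instances above (write, read, flush and unmap workloads on core masks 0x10, 0x20, 0x40 and 0x80) is pointed at the same target through a small JSON config that gen_nvmf_target_json writes to file descriptor 63. Reconstructed from the printf output in this trace (only the attach-controller entry is printed; the surrounding subsystems wrapper that gen_nvmf_target_json adds is not shown here), the write instance amounts to:

  # one bdevperf process per workload; the config below arrives on /dev/fd/63
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
  # payload produced by gen_nvmf_target_json:
  # {
  #   "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
  #               "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
  #               "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false },
  #   "method": "bdev_nvme_attach_controller"
  # }

So every bdevperf run attaches Nvme1 over NVMe/TCP to 10.0.0.2:4420 and drives 128 outstanding 4096-byte I/Os at it for one second, producing the per-workload latency tables that follow.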
00:24:30.548 00:24:30.548 Latency(us) 00:24:30.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.548 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:24:30.548 Nvme1n1 : 1.02 4240.51 16.56 0.00 0.00 29660.09 9055.88 59578.18 00:24:30.548 =================================================================================================================== 00:24:30.548 Total : 4240.51 16.56 0.00 0.00 29660.09 9055.88 59578.18 00:24:30.548 00:24:30.548 Latency(us) 00:24:30.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.548 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:24:30.548 Nvme1n1 : 1.00 157005.74 613.30 0.00 0.00 812.42 348.16 1146.88 00:24:30.548 =================================================================================================================== 00:24:30.548 Total : 157005.74 613.30 0.00 0.00 812.42 348.16 1146.88 00:24:30.548 00:24:30.548 Latency(us) 00:24:30.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.548 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:24:30.548 Nvme1n1 : 1.01 9318.38 36.40 0.00 0.00 13671.97 7566.43 29431.62 00:24:30.548 =================================================================================================================== 00:24:30.548 Total : 9318.38 36.40 0.00 0.00 13671.97 7566.43 29431.62 00:24:30.548 00:24:30.548 Latency(us) 00:24:30.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.548 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:24:30.548 Nvme1n1 : 1.01 4636.50 18.11 0.00 0.00 27479.71 7923.90 70540.57 00:24:30.548 =================================================================================================================== 00:24:30.548 Total : 4636.50 18.11 0.00 0.00 27479.71 7923.90 70540.57 00:24:30.548 16:01:33 -- target/bdev_io_wait.sh@38 -- # wait 61481 00:24:30.806 16:01:33 -- target/bdev_io_wait.sh@39 -- # wait 61483 00:24:30.806 16:01:33 -- target/bdev_io_wait.sh@40 -- # wait 61486 00:24:30.806 16:01:33 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:30.806 16:01:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:30.806 16:01:33 -- common/autotest_common.sh@10 -- # set +x 00:24:30.806 16:01:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:30.806 16:01:33 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:24:30.806 16:01:33 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:24:30.806 16:01:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:30.806 16:01:33 -- nvmf/common.sh@116 -- # sync 00:24:30.806 16:01:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:30.806 16:01:33 -- nvmf/common.sh@119 -- # set +e 00:24:30.806 16:01:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:30.806 16:01:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:30.806 rmmod nvme_tcp 00:24:30.806 rmmod nvme_fabrics 00:24:30.806 rmmod nvme_keyring 00:24:30.806 16:01:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:30.806 16:01:33 -- nvmf/common.sh@123 -- # set -e 00:24:30.806 16:01:33 -- nvmf/common.sh@124 -- # return 0 00:24:30.806 16:01:33 -- nvmf/common.sh@477 -- # '[' -n 61451 ']' 00:24:30.806 16:01:33 -- nvmf/common.sh@478 -- # killprocess 61451 00:24:30.806 16:01:33 -- common/autotest_common.sh@926 -- # '[' -z 61451 ']' 00:24:30.806 16:01:33 -- common/autotest_common.sh@930 -- 
# kill -0 61451 00:24:30.806 16:01:33 -- common/autotest_common.sh@931 -- # uname 00:24:30.806 16:01:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:30.806 16:01:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61451 00:24:31.064 16:01:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:31.064 killing process with pid 61451 00:24:31.064 16:01:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:31.064 16:01:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61451' 00:24:31.064 16:01:33 -- common/autotest_common.sh@945 -- # kill 61451 00:24:31.064 16:01:33 -- common/autotest_common.sh@950 -- # wait 61451 00:24:31.064 16:01:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:31.064 16:01:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:31.064 16:01:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:31.064 16:01:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:31.064 16:01:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:31.064 16:01:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.064 16:01:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:31.064 16:01:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.064 16:01:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:31.064 ************************************ 00:24:31.064 END TEST nvmf_bdev_io_wait 00:24:31.064 ************************************ 00:24:31.064 00:24:31.064 real 0m3.067s 00:24:31.064 user 0m13.632s 00:24:31.064 sys 0m1.831s 00:24:31.064 16:01:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:31.064 16:01:33 -- common/autotest_common.sh@10 -- # set +x 00:24:31.323 16:01:33 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:24:31.323 16:01:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:31.323 16:01:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:31.323 16:01:33 -- common/autotest_common.sh@10 -- # set +x 00:24:31.323 ************************************ 00:24:31.323 START TEST nvmf_queue_depth 00:24:31.323 ************************************ 00:24:31.323 16:01:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:24:31.323 * Looking for test storage... 
00:24:31.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:31.323 16:01:34 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:31.323 16:01:34 -- nvmf/common.sh@7 -- # uname -s 00:24:31.323 16:01:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.323 16:01:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.323 16:01:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.323 16:01:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.323 16:01:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.323 16:01:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.323 16:01:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.323 16:01:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.323 16:01:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.323 16:01:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.323 16:01:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:24:31.323 16:01:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:24:31.323 16:01:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.323 16:01:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.323 16:01:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:31.323 16:01:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:31.323 16:01:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.323 16:01:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.323 16:01:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.323 16:01:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.323 16:01:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.323 16:01:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.323 16:01:34 -- 
paths/export.sh@5 -- # export PATH 00:24:31.323 16:01:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.323 16:01:34 -- nvmf/common.sh@46 -- # : 0 00:24:31.323 16:01:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:31.323 16:01:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:31.323 16:01:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:31.323 16:01:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.323 16:01:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.323 16:01:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:31.323 16:01:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:31.323 16:01:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:31.323 16:01:34 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:24:31.323 16:01:34 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:24:31.323 16:01:34 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:31.323 16:01:34 -- target/queue_depth.sh@19 -- # nvmftestinit 00:24:31.323 16:01:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:31.323 16:01:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.323 16:01:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:31.323 16:01:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:31.323 16:01:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:31.323 16:01:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.323 16:01:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:31.323 16:01:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.323 16:01:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:31.323 16:01:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:31.323 16:01:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:31.323 16:01:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:31.323 16:01:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:31.323 16:01:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:31.323 16:01:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.323 16:01:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.323 16:01:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:31.323 16:01:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:31.323 16:01:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:31.323 16:01:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:31.323 16:01:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:31.323 16:01:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.323 16:01:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:31.323 16:01:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:31.323 16:01:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:31.323 16:01:34 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:31.323 16:01:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:31.323 16:01:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:31.323 Cannot find device "nvmf_tgt_br" 00:24:31.323 16:01:34 -- nvmf/common.sh@154 -- # true 00:24:31.323 16:01:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:31.323 Cannot find device "nvmf_tgt_br2" 00:24:31.323 16:01:34 -- nvmf/common.sh@155 -- # true 00:24:31.323 16:01:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:31.323 16:01:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:31.323 Cannot find device "nvmf_tgt_br" 00:24:31.323 16:01:34 -- nvmf/common.sh@157 -- # true 00:24:31.323 16:01:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:31.323 Cannot find device "nvmf_tgt_br2" 00:24:31.323 16:01:34 -- nvmf/common.sh@158 -- # true 00:24:31.323 16:01:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:31.323 16:01:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:31.599 16:01:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:31.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:31.599 16:01:34 -- nvmf/common.sh@161 -- # true 00:24:31.599 16:01:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:31.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:31.599 16:01:34 -- nvmf/common.sh@162 -- # true 00:24:31.599 16:01:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:31.599 16:01:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:31.599 16:01:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:31.599 16:01:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:31.599 16:01:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:31.599 16:01:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:31.599 16:01:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:31.599 16:01:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:31.599 16:01:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:31.599 16:01:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:31.599 16:01:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:31.599 16:01:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:31.599 16:01:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:31.599 16:01:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:31.599 16:01:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:31.599 16:01:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:31.599 16:01:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:31.599 16:01:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:31.599 16:01:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:31.599 16:01:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:31.599 16:01:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:31.599 
16:01:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:31.599 16:01:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:31.599 16:01:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:31.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:24:31.599 00:24:31.599 --- 10.0.0.2 ping statistics --- 00:24:31.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.599 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:24:31.599 16:01:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:31.599 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:31.599 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:24:31.599 00:24:31.599 --- 10.0.0.3 ping statistics --- 00:24:31.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.600 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:24:31.600 16:01:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:31.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:31.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:24:31.600 00:24:31.600 --- 10.0.0.1 ping statistics --- 00:24:31.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.600 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:24:31.600 16:01:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.600 16:01:34 -- nvmf/common.sh@421 -- # return 0 00:24:31.600 16:01:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:31.600 16:01:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.600 16:01:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:31.600 16:01:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:31.600 16:01:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.600 16:01:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:31.600 16:01:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:31.600 16:01:34 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:24:31.600 16:01:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:31.600 16:01:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:31.600 16:01:34 -- common/autotest_common.sh@10 -- # set +x 00:24:31.600 16:01:34 -- nvmf/common.sh@469 -- # nvmfpid=61684 00:24:31.600 16:01:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:31.600 16:01:34 -- nvmf/common.sh@470 -- # waitforlisten 61684 00:24:31.600 16:01:34 -- common/autotest_common.sh@819 -- # '[' -z 61684 ']' 00:24:31.600 16:01:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.600 16:01:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:31.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.600 16:01:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.600 16:01:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:31.600 16:01:34 -- common/autotest_common.sh@10 -- # set +x 00:24:31.857 [2024-07-22 16:01:34.478936] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:24:31.857 [2024-07-22 16:01:34.479039] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.857 [2024-07-22 16:01:34.615909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.857 [2024-07-22 16:01:34.679649] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:31.857 [2024-07-22 16:01:34.679800] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.857 [2024-07-22 16:01:34.679815] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.857 [2024-07-22 16:01:34.679824] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.857 [2024-07-22 16:01:34.679856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.791 16:01:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:32.791 16:01:35 -- common/autotest_common.sh@852 -- # return 0 00:24:32.791 16:01:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:32.791 16:01:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:32.791 16:01:35 -- common/autotest_common.sh@10 -- # set +x 00:24:32.791 16:01:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.791 16:01:35 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:32.791 16:01:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.791 16:01:35 -- common/autotest_common.sh@10 -- # set +x 00:24:32.791 [2024-07-22 16:01:35.535682] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.791 16:01:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.791 16:01:35 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:32.791 16:01:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.791 16:01:35 -- common/autotest_common.sh@10 -- # set +x 00:24:32.791 Malloc0 00:24:32.791 16:01:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.791 16:01:35 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:32.791 16:01:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.791 16:01:35 -- common/autotest_common.sh@10 -- # set +x 00:24:32.791 16:01:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.791 16:01:35 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:32.791 16:01:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.791 16:01:35 -- common/autotest_common.sh@10 -- # set +x 00:24:32.791 16:01:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.791 16:01:35 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:32.791 16:01:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.791 16:01:35 -- common/autotest_common.sh@10 -- # set +x 00:24:32.791 [2024-07-22 16:01:35.594133] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
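The rpc_cmd calls above go to the target's default /var/tmp/spdk.sock; spelled out with scripts/rpc.py (a sketch only, with the flags exactly as they appear in this trace), the target-side setup for the queue-depth test is:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                           # TCP transport with the options nvmf/common.sh selects for tcp
  $RPC bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB RAM-backed bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

which leaves the target listening on 10.0.0.2:4420 with Malloc0 exported through cnode1, ready for the bdevperf initiator started next.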
00:24:32.791 16:01:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.791 16:01:35 -- target/queue_depth.sh@30 -- # bdevperf_pid=61716 00:24:32.791 16:01:35 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:24:32.791 16:01:35 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:32.791 16:01:35 -- target/queue_depth.sh@33 -- # waitforlisten 61716 /var/tmp/bdevperf.sock 00:24:32.791 16:01:35 -- common/autotest_common.sh@819 -- # '[' -z 61716 ']' 00:24:32.791 16:01:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:32.791 16:01:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:32.791 16:01:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:32.791 16:01:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:32.791 16:01:35 -- common/autotest_common.sh@10 -- # set +x 00:24:33.057 [2024-07-22 16:01:35.656722] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:33.057 [2024-07-22 16:01:35.657033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61716 ] 00:24:33.057 [2024-07-22 16:01:35.800856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.057 [2024-07-22 16:01:35.859759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.316 16:01:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:33.316 16:01:35 -- common/autotest_common.sh@852 -- # return 0 00:24:33.316 16:01:35 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:33.316 16:01:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:33.316 16:01:35 -- common/autotest_common.sh@10 -- # set +x 00:24:33.316 NVMe0n1 00:24:33.316 16:01:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:33.316 16:01:36 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:33.316 Running I/O for 10 seconds... 
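On the initiator side the flow above is: start bdevperf idle with -z on its own RPC socket, attach the remote subsystem as a local NVMe bdev over that socket, then trigger the run with bdevperf.py. As a sketch (paths and arguments as they appear in this trace; the sequencing and backgrounding are simplified here):

  BPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  $BPERF -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &      # queue depth 1024, 4 KiB verify I/O, 10 s
  # once bdevperf listens on its socket, attach the target namespace; it shows up as NVMe0n1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # kick off the queued job and collect the results
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The roughly 13k IOPS at queue depth 1024 against the single malloc-backed namespace is what the 10-second verify run just below summarizes.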
00:24:45.516 00:24:45.516 Latency(us) 00:24:45.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.516 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:24:45.516 Verification LBA range: start 0x0 length 0x4000 00:24:45.516 NVMe0n1 : 10.07 13038.00 50.93 0.00 0.00 78200.96 16562.73 100567.97 00:24:45.516 =================================================================================================================== 00:24:45.516 Total : 13038.00 50.93 0.00 0.00 78200.96 16562.73 100567.97 00:24:45.516 0 00:24:45.516 16:01:46 -- target/queue_depth.sh@39 -- # killprocess 61716 00:24:45.516 16:01:46 -- common/autotest_common.sh@926 -- # '[' -z 61716 ']' 00:24:45.516 16:01:46 -- common/autotest_common.sh@930 -- # kill -0 61716 00:24:45.516 16:01:46 -- common/autotest_common.sh@931 -- # uname 00:24:45.516 16:01:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:45.516 16:01:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61716 00:24:45.516 killing process with pid 61716 00:24:45.516 Received shutdown signal, test time was about 10.000000 seconds 00:24:45.516 00:24:45.516 Latency(us) 00:24:45.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.516 =================================================================================================================== 00:24:45.516 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:45.516 16:01:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:45.516 16:01:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:45.516 16:01:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61716' 00:24:45.516 16:01:46 -- common/autotest_common.sh@945 -- # kill 61716 00:24:45.516 16:01:46 -- common/autotest_common.sh@950 -- # wait 61716 00:24:45.516 16:01:46 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:24:45.516 16:01:46 -- target/queue_depth.sh@43 -- # nvmftestfini 00:24:45.516 16:01:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:45.516 16:01:46 -- nvmf/common.sh@116 -- # sync 00:24:45.516 16:01:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:45.516 16:01:46 -- nvmf/common.sh@119 -- # set +e 00:24:45.516 16:01:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:45.516 16:01:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:45.516 rmmod nvme_tcp 00:24:45.516 rmmod nvme_fabrics 00:24:45.516 rmmod nvme_keyring 00:24:45.516 16:01:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:45.516 16:01:46 -- nvmf/common.sh@123 -- # set -e 00:24:45.516 16:01:46 -- nvmf/common.sh@124 -- # return 0 00:24:45.516 16:01:46 -- nvmf/common.sh@477 -- # '[' -n 61684 ']' 00:24:45.516 16:01:46 -- nvmf/common.sh@478 -- # killprocess 61684 00:24:45.516 16:01:46 -- common/autotest_common.sh@926 -- # '[' -z 61684 ']' 00:24:45.516 16:01:46 -- common/autotest_common.sh@930 -- # kill -0 61684 00:24:45.516 16:01:46 -- common/autotest_common.sh@931 -- # uname 00:24:45.516 16:01:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:45.516 16:01:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61684 00:24:45.516 killing process with pid 61684 00:24:45.516 16:01:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:45.516 16:01:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:45.516 16:01:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61684' 00:24:45.516 16:01:46 -- 
common/autotest_common.sh@945 -- # kill 61684 00:24:45.516 16:01:46 -- common/autotest_common.sh@950 -- # wait 61684 00:24:45.516 16:01:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:45.516 16:01:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:45.516 16:01:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:45.516 16:01:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:45.516 16:01:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:45.516 16:01:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.516 16:01:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.516 16:01:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.516 16:01:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:45.516 ************************************ 00:24:45.516 END TEST nvmf_queue_depth 00:24:45.516 ************************************ 00:24:45.516 00:24:45.516 real 0m12.877s 00:24:45.516 user 0m22.118s 00:24:45.516 sys 0m1.962s 00:24:45.516 16:01:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:45.516 16:01:46 -- common/autotest_common.sh@10 -- # set +x 00:24:45.517 16:01:46 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:24:45.517 16:01:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:45.517 16:01:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:45.517 16:01:46 -- common/autotest_common.sh@10 -- # set +x 00:24:45.517 ************************************ 00:24:45.517 START TEST nvmf_multipath 00:24:45.517 ************************************ 00:24:45.517 16:01:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:24:45.517 * Looking for test storage... 
00:24:45.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:45.517 16:01:46 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:45.517 16:01:46 -- nvmf/common.sh@7 -- # uname -s 00:24:45.517 16:01:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.517 16:01:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.517 16:01:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.517 16:01:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.517 16:01:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.517 16:01:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.517 16:01:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.517 16:01:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.517 16:01:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.517 16:01:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.517 16:01:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:24:45.517 16:01:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:24:45.517 16:01:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.517 16:01:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.517 16:01:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:45.517 16:01:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:45.517 16:01:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.517 16:01:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.517 16:01:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.517 16:01:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.517 16:01:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.517 16:01:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.517 16:01:46 -- 
paths/export.sh@5 -- # export PATH 00:24:45.517 16:01:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.517 16:01:46 -- nvmf/common.sh@46 -- # : 0 00:24:45.517 16:01:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:45.517 16:01:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:45.517 16:01:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:45.517 16:01:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.517 16:01:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.517 16:01:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:45.517 16:01:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:45.517 16:01:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:45.517 16:01:46 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:45.517 16:01:46 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:45.517 16:01:46 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:45.517 16:01:46 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:45.517 16:01:46 -- target/multipath.sh@43 -- # nvmftestinit 00:24:45.517 16:01:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:45.517 16:01:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.517 16:01:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:45.517 16:01:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:45.517 16:01:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:45.517 16:01:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.517 16:01:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.517 16:01:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.517 16:01:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:45.517 16:01:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:45.517 16:01:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:45.517 16:01:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:45.517 16:01:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:45.517 16:01:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:45.517 16:01:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.517 16:01:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.517 16:01:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:45.517 16:01:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:45.517 16:01:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:45.517 16:01:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:45.517 16:01:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:45.517 16:01:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.517 16:01:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:45.517 16:01:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:45.517 16:01:46 -- nvmf/common.sh@150 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:45.517 16:01:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:45.517 16:01:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:45.517 16:01:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:45.517 Cannot find device "nvmf_tgt_br" 00:24:45.517 16:01:47 -- nvmf/common.sh@154 -- # true 00:24:45.517 16:01:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:45.517 Cannot find device "nvmf_tgt_br2" 00:24:45.517 16:01:47 -- nvmf/common.sh@155 -- # true 00:24:45.517 16:01:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:45.517 16:01:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:45.517 Cannot find device "nvmf_tgt_br" 00:24:45.517 16:01:47 -- nvmf/common.sh@157 -- # true 00:24:45.517 16:01:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:45.517 Cannot find device "nvmf_tgt_br2" 00:24:45.517 16:01:47 -- nvmf/common.sh@158 -- # true 00:24:45.517 16:01:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:45.517 16:01:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:45.517 16:01:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:45.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:45.517 16:01:47 -- nvmf/common.sh@161 -- # true 00:24:45.517 16:01:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:45.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:45.517 16:01:47 -- nvmf/common.sh@162 -- # true 00:24:45.517 16:01:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:45.517 16:01:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:45.517 16:01:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:45.517 16:01:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:45.517 16:01:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:45.517 16:01:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:45.517 16:01:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:45.517 16:01:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:45.517 16:01:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:45.517 16:01:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:45.517 16:01:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:45.517 16:01:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:45.517 16:01:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:45.517 16:01:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:45.517 16:01:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:45.517 16:01:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:45.517 16:01:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:45.517 16:01:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:45.518 16:01:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:45.518 16:01:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:45.518 16:01:47 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:45.518 16:01:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:45.518 16:01:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:45.518 16:01:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:45.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:45.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:24:45.518 00:24:45.518 --- 10.0.0.2 ping statistics --- 00:24:45.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.518 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:24:45.518 16:01:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:45.518 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:45.518 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:24:45.518 00:24:45.518 --- 10.0.0.3 ping statistics --- 00:24:45.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.518 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:24:45.518 16:01:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:45.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:45.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:24:45.518 00:24:45.518 --- 10.0.0.1 ping statistics --- 00:24:45.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.518 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:24:45.518 16:01:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.518 16:01:47 -- nvmf/common.sh@421 -- # return 0 00:24:45.518 16:01:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:45.518 16:01:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.518 16:01:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:45.518 16:01:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:45.518 16:01:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.518 16:01:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:45.518 16:01:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:45.518 16:01:47 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:24:45.518 16:01:47 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:24:45.518 16:01:47 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:24:45.518 16:01:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:45.518 16:01:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:45.518 16:01:47 -- common/autotest_common.sh@10 -- # set +x 00:24:45.518 16:01:47 -- nvmf/common.sh@469 -- # nvmfpid=62027 00:24:45.518 16:01:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:45.518 16:01:47 -- nvmf/common.sh@470 -- # waitforlisten 62027 00:24:45.518 16:01:47 -- common/autotest_common.sh@819 -- # '[' -z 62027 ']' 00:24:45.518 16:01:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.518 16:01:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:45.518 16:01:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
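[editor's note] In the multipath run that follows, the check_ana_state helper from test/nvmf/target/multipath.sh shows up only as expanded xtrace output (the repeated "[[ ! -e /sys/block/nvme0cXn1/ana_state ]]" and "[[ ... != ... ]]" lines). A rough sketch of the function those traces come from, reconstructed from the trace rather than copied from the script, is:

    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # Poll the kernel-reported ANA state for this controller path until it
        # matches the expected value (optimized / non-optimized / inaccessible),
        # giving up after roughly 20 seconds. Reconstructed sketch, not the
        # verbatim SPDK script.
        while [[ ! -e $ana_state_f ]] || [[ $(< "$ana_state_f") != "$ana_state" ]]; do
            sleep 1
            (( timeout-- == 0 )) && return 1
        done
    }
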
00:24:45.518 16:01:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:45.518 16:01:47 -- common/autotest_common.sh@10 -- # set +x 00:24:45.518 [2024-07-22 16:01:47.394164] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:45.518 [2024-07-22 16:01:47.394262] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.518 [2024-07-22 16:01:47.532545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:45.518 [2024-07-22 16:01:47.594237] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:45.518 [2024-07-22 16:01:47.594618] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.518 [2024-07-22 16:01:47.594744] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.518 [2024-07-22 16:01:47.594921] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.518 [2024-07-22 16:01:47.595144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.518 [2024-07-22 16:01:47.595221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.518 [2024-07-22 16:01:47.595289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:45.518 [2024-07-22 16:01:47.595289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.518 16:01:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:45.518 16:01:47 -- common/autotest_common.sh@852 -- # return 0 00:24:45.518 16:01:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:45.518 16:01:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:45.518 16:01:47 -- common/autotest_common.sh@10 -- # set +x 00:24:45.518 16:01:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.518 16:01:47 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:45.518 [2024-07-22 16:01:48.119830] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.518 16:01:48 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:45.776 Malloc0 00:24:45.776 16:01:48 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:24:46.034 16:01:48 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:46.291 16:01:48 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.548 [2024-07-22 16:01:49.216865] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.548 16:01:49 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:46.806 [2024-07-22 16:01:49.453112] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:46.806 16:01:49 -- target/multipath.sh@67 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 --hostid=3afe7664-1acb-4c6d-8a94-b57f48f48b78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:24:46.806 16:01:49 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 --hostid=3afe7664-1acb-4c6d-8a94-b57f48f48b78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:24:47.064 16:01:49 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:24:47.064 16:01:49 -- common/autotest_common.sh@1177 -- # local i=0 00:24:47.064 16:01:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:47.064 16:01:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:47.064 16:01:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:48.963 16:01:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:48.963 16:01:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:48.963 16:01:51 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:24:48.963 16:01:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:48.963 16:01:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:48.963 16:01:51 -- common/autotest_common.sh@1187 -- # return 0 00:24:48.963 16:01:51 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:24:48.963 16:01:51 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:24:48.963 16:01:51 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:24:48.963 16:01:51 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:24:48.963 16:01:51 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:24:48.963 16:01:51 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:24:48.963 16:01:51 -- target/multipath.sh@38 -- # return 0 00:24:48.963 16:01:51 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:24:48.963 16:01:51 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:24:48.963 16:01:51 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:24:48.963 16:01:51 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:24:48.963 16:01:51 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:24:48.963 16:01:51 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:24:48.963 16:01:51 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:24:48.963 16:01:51 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:24:48.963 16:01:51 -- target/multipath.sh@22 -- # local timeout=20 00:24:48.963 16:01:51 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:48.963 16:01:51 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:24:48.963 16:01:51 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:24:48.963 16:01:51 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:24:48.963 16:01:51 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:24:48.963 16:01:51 -- target/multipath.sh@22 -- # local timeout=20 00:24:48.963 16:01:51 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:48.964 16:01:51 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:24:48.964 16:01:51 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:24:48.964 16:01:51 -- target/multipath.sh@85 -- # echo numa 00:24:48.964 16:01:51 -- target/multipath.sh@88 -- # fio_pid=62109 00:24:48.964 16:01:51 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:24:48.964 16:01:51 -- target/multipath.sh@90 -- # sleep 1 00:24:48.964 [global] 00:24:48.964 thread=1 00:24:48.964 invalidate=1 00:24:48.964 rw=randrw 00:24:48.964 time_based=1 00:24:48.964 runtime=6 00:24:48.964 ioengine=libaio 00:24:48.964 direct=1 00:24:48.964 bs=4096 00:24:48.964 iodepth=128 00:24:48.964 norandommap=0 00:24:48.964 numjobs=1 00:24:48.964 00:24:48.964 verify_dump=1 00:24:48.964 verify_backlog=512 00:24:48.964 verify_state_save=0 00:24:48.964 do_verify=1 00:24:48.964 verify=crc32c-intel 00:24:48.964 [job0] 00:24:48.964 filename=/dev/nvme0n1 00:24:48.964 Could not set queue depth (nvme0n1) 00:24:49.222 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:49.222 fio-3.35 00:24:49.222 Starting 1 thread 00:24:50.156 16:01:52 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:50.413 16:01:53 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:24:50.672 16:01:53 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:24:50.672 16:01:53 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:24:50.672 16:01:53 -- target/multipath.sh@22 -- # local timeout=20 00:24:50.672 16:01:53 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:50.672 16:01:53 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:24:50.672 16:01:53 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:50.672 16:01:53 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:24:50.672 16:01:53 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:24:50.672 16:01:53 -- target/multipath.sh@22 -- # local timeout=20 00:24:50.672 16:01:53 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:50.672 16:01:53 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:50.672 16:01:53 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:50.672 16:01:53 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:50.930 16:01:53 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:24:51.495 16:01:54 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:24:51.495 16:01:54 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:24:51.495 16:01:54 -- target/multipath.sh@22 -- # local timeout=20 00:24:51.495 16:01:54 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:51.495 16:01:54 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:24:51.495 16:01:54 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:51.495 16:01:54 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:24:51.495 16:01:54 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:24:51.495 16:01:54 -- target/multipath.sh@22 -- # local timeout=20 00:24:51.495 16:01:54 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:51.495 16:01:54 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:51.495 16:01:54 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:51.495 16:01:54 -- target/multipath.sh@104 -- # wait 62109 00:24:55.705 00:24:55.705 job0: (groupid=0, jobs=1): err= 0: pid=62136: Mon Jul 22 16:01:58 2024 00:24:55.705 read: IOPS=10.4k, BW=40.7MiB/s (42.7MB/s)(244MiB/6006msec) 00:24:55.705 slat (usec): min=5, max=5741, avg=54.99, stdev=217.62 00:24:55.705 clat (usec): min=1447, max=17522, avg=8362.79, stdev=1568.06 00:24:55.705 lat (usec): min=1472, max=17531, avg=8417.78, stdev=1573.47 00:24:55.705 clat percentiles (usec): 00:24:55.705 | 1.00th=[ 4424], 5.00th=[ 6194], 10.00th=[ 6980], 20.00th=[ 7504], 00:24:55.705 | 30.00th=[ 7767], 40.00th=[ 7963], 50.00th=[ 8160], 60.00th=[ 8455], 00:24:55.705 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[10159], 95.00th=[11731], 00:24:55.705 | 99.00th=[13435], 99.50th=[14222], 99.90th=[15533], 99.95th=[15795], 00:24:55.706 | 99.99th=[16581] 00:24:55.706 bw ( KiB/s): min= 9816, max=28232, per=51.79%, avg=21579.91, stdev=6159.49, samples=11 00:24:55.706 iops : min= 2454, max= 7058, avg=5394.91, stdev=1539.89, samples=11 00:24:55.706 write: IOPS=6116, BW=23.9MiB/s (25.1MB/s)(127MiB/5334msec); 0 zone resets 00:24:55.706 slat (usec): min=10, max=3586, avg=66.32, stdev=141.77 00:24:55.706 clat (usec): min=695, max=16603, avg=7311.32, stdev=1409.60 00:24:55.706 lat (usec): min=730, max=16627, avg=7377.63, stdev=1415.46 00:24:55.706 clat percentiles (usec): 00:24:55.706 | 1.00th=[ 3621], 5.00th=[ 4424], 10.00th=[ 5407], 20.00th=[ 6652], 00:24:55.706 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7439], 60.00th=[ 7635], 00:24:55.706 | 70.00th=[ 7832], 80.00th=[ 8094], 90.00th=[ 8717], 95.00th=[ 9241], 00:24:55.706 | 99.00th=[11469], 99.50th=[12518], 99.90th=[14353], 99.95th=[15270], 00:24:55.706 | 99.99th=[16581] 00:24:55.706 bw ( KiB/s): min=10019, max=27648, per=88.43%, avg=21637.18, stdev=6009.74, samples=11 00:24:55.706 iops : min= 2504, max= 6912, avg=5409.18, stdev=1502.59, samples=11 00:24:55.706 lat (usec) : 750=0.01%, 1000=0.01% 00:24:55.706 lat (msec) : 2=0.02%, 4=1.20%, 10=90.83%, 20=7.95% 00:24:55.706 cpu : usr=5.93%, sys=26.66%, ctx=5525, majf=0, minf=96 00:24:55.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:24:55.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:55.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:55.706 issued rwts: total=62557,32628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:55.706 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:55.706 00:24:55.706 Run status group 0 (all jobs): 00:24:55.706 READ: bw=40.7MiB/s (42.7MB/s), 40.7MiB/s-40.7MiB/s (42.7MB/s-42.7MB/s), io=244MiB (256MB), run=6006-6006msec 00:24:55.706 WRITE: bw=23.9MiB/s (25.1MB/s), 23.9MiB/s-23.9MiB/s (25.1MB/s-25.1MB/s), io=127MiB (134MB), run=5334-5334msec 00:24:55.706 00:24:55.706 Disk stats (read/write): 
00:24:55.706 nvme0n1: ios=61646/31986, merge=0/0, ticks=490565/216653, in_queue=707218, util=98.68% 00:24:55.706 16:01:58 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:55.706 16:01:58 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:24:56.272 16:01:58 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:24:56.272 16:01:58 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:24:56.272 16:01:58 -- target/multipath.sh@22 -- # local timeout=20 00:24:56.272 16:01:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:56.272 16:01:58 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:24:56.272 16:01:58 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:24:56.272 16:01:58 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:24:56.272 16:01:58 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:24:56.272 16:01:58 -- target/multipath.sh@22 -- # local timeout=20 00:24:56.272 16:01:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:56.272 16:01:58 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:56.272 16:01:58 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:24:56.272 16:01:58 -- target/multipath.sh@113 -- # echo round-robin 00:24:56.272 16:01:58 -- target/multipath.sh@116 -- # fio_pid=62217 00:24:56.272 16:01:58 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:24:56.272 16:01:58 -- target/multipath.sh@118 -- # sleep 1 00:24:56.272 [global] 00:24:56.272 thread=1 00:24:56.272 invalidate=1 00:24:56.272 rw=randrw 00:24:56.272 time_based=1 00:24:56.272 runtime=6 00:24:56.272 ioengine=libaio 00:24:56.272 direct=1 00:24:56.272 bs=4096 00:24:56.272 iodepth=128 00:24:56.272 norandommap=0 00:24:56.272 numjobs=1 00:24:56.272 00:24:56.272 verify_dump=1 00:24:56.272 verify_backlog=512 00:24:56.272 verify_state_save=0 00:24:56.272 do_verify=1 00:24:56.272 verify=crc32c-intel 00:24:56.272 [job0] 00:24:56.272 filename=/dev/nvme0n1 00:24:56.272 Could not set queue depth (nvme0n1) 00:24:56.272 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:56.272 fio-3.35 00:24:56.272 Starting 1 thread 00:24:57.205 16:01:59 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:57.463 16:02:00 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:24:57.721 16:02:00 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:24:57.721 16:02:00 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:24:57.721 16:02:00 -- target/multipath.sh@22 -- # local timeout=20 00:24:57.721 16:02:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:57.721 16:02:00 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:24:57.721 16:02:00 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:57.721 16:02:00 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:24:57.721 16:02:00 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:24:57.721 16:02:00 -- target/multipath.sh@22 -- # local timeout=20 00:24:57.721 16:02:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:57.721 16:02:00 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:57.721 16:02:00 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:57.721 16:02:00 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:57.979 16:02:00 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:24:58.546 16:02:01 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:24:58.546 16:02:01 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:24:58.546 16:02:01 -- target/multipath.sh@22 -- # local timeout=20 00:24:58.546 16:02:01 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:58.546 16:02:01 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:24:58.546 16:02:01 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:58.546 16:02:01 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:24:58.546 16:02:01 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:24:58.546 16:02:01 -- target/multipath.sh@22 -- # local timeout=20 00:24:58.546 16:02:01 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:58.546 16:02:01 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:24:58.546 16:02:01 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:58.546 16:02:01 -- target/multipath.sh@132 -- # wait 62217 00:25:02.732 00:25:02.732 job0: (groupid=0, jobs=1): err= 0: pid=62239: Mon Jul 22 16:02:05 2024 00:25:02.732 read: IOPS=11.3k, BW=44.2MiB/s (46.4MB/s)(266MiB/6006msec) 00:25:02.732 slat (usec): min=2, max=5878, avg=42.04, stdev=188.22 00:25:02.732 clat (usec): min=218, max=25701, avg=7600.28, stdev=2668.80 00:25:02.732 lat (usec): min=254, max=25716, avg=7642.32, stdev=2676.59 00:25:02.732 clat percentiles (usec): 00:25:02.732 | 1.00th=[ 922], 5.00th=[ 2073], 10.00th=[ 3490], 20.00th=[ 6259], 00:25:02.732 | 30.00th=[ 7177], 40.00th=[ 7504], 50.00th=[ 7832], 60.00th=[ 8094], 00:25:02.732 | 70.00th=[ 8455], 80.00th=[ 8979], 90.00th=[10814], 95.00th=[11863], 00:25:02.732 | 99.00th=[14615], 99.50th=[15795], 99.90th=[17433], 99.95th=[18220], 00:25:02.732 | 99.99th=[23725] 00:25:02.732 bw ( KiB/s): min=10368, max=34520, per=54.29%, avg=24583.64, stdev=7869.16, samples=11 00:25:02.732 iops : min= 2592, max= 8630, avg=6145.91, stdev=1967.29, samples=11 00:25:02.732 write: IOPS=6936, BW=27.1MiB/s (28.4MB/s)(145MiB/5368msec); 0 zone resets 00:25:02.732 slat (usec): min=3, max=2301, avg=57.28, stdev=124.36 00:25:02.732 clat (usec): min=275, max=22754, avg=6600.16, stdev=2084.09 00:25:02.732 lat (usec): min=324, max=22789, avg=6657.44, stdev=2091.19 00:25:02.732 clat percentiles (usec): 00:25:02.732 | 1.00th=[ 889], 5.00th=[ 2180], 10.00th=[ 3654], 20.00th=[ 5145], 00:25:02.732 | 30.00th=[ 6390], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7242], 00:25:02.732 | 70.00th=[ 7504], 80.00th=[ 7767], 90.00th=[ 8291], 95.00th=[ 9503], 00:25:02.732 | 99.00th=[11600], 99.50th=[12911], 99.90th=[15664], 99.95th=[16712], 00:25:02.732 | 99.99th=[22152] 00:25:02.732 bw ( KiB/s): min=10848, max=34712, per=88.76%, avg=24628.18, stdev=7676.39, samples=11 00:25:02.732 iops : min= 2712, max= 8678, avg=6157.00, stdev=1919.09, samples=11 00:25:02.732 lat (usec) : 250=0.01%, 500=0.15%, 750=0.38%, 1000=0.78% 00:25:02.732 lat (msec) : 2=3.25%, 4=7.18%, 10=78.13%, 20=10.10%, 50=0.03% 00:25:02.732 cpu : usr=6.46%, sys=27.19%, ctx=6845, majf=0, minf=116 00:25:02.732 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:25:02.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:02.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:02.732 issued rwts: total=67985,37236,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:02.732 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:02.732 00:25:02.732 Run status group 0 (all jobs): 00:25:02.732 READ: bw=44.2MiB/s (46.4MB/s), 44.2MiB/s-44.2MiB/s (46.4MB/s-46.4MB/s), io=266MiB (278MB), run=6006-6006msec 00:25:02.732 WRITE: bw=27.1MiB/s (28.4MB/s), 27.1MiB/s-27.1MiB/s (28.4MB/s-28.4MB/s), io=145MiB (153MB), run=5368-5368msec 00:25:02.732 00:25:02.732 Disk stats (read/write): 00:25:02.732 nvme0n1: ios=67173/36352, merge=0/0, ticks=485172/221404, in_queue=706576, util=98.61% 00:25:02.732 16:02:05 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:02.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:25:02.732 16:02:05 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:02.732 16:02:05 -- common/autotest_common.sh@1198 -- # local i=0 00:25:02.732 16:02:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:02.732 
16:02:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:02.732 16:02:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:02.732 16:02:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:02.732 16:02:05 -- common/autotest_common.sh@1210 -- # return 0 00:25:02.732 16:02:05 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:02.732 16:02:05 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:25:02.732 16:02:05 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:25:02.732 16:02:05 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:25:02.732 16:02:05 -- target/multipath.sh@144 -- # nvmftestfini 00:25:02.732 16:02:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:02.732 16:02:05 -- nvmf/common.sh@116 -- # sync 00:25:02.732 16:02:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:02.732 16:02:05 -- nvmf/common.sh@119 -- # set +e 00:25:02.732 16:02:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:02.732 16:02:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:02.732 rmmod nvme_tcp 00:25:02.732 rmmod nvme_fabrics 00:25:02.732 rmmod nvme_keyring 00:25:02.991 16:02:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:02.991 16:02:05 -- nvmf/common.sh@123 -- # set -e 00:25:02.991 16:02:05 -- nvmf/common.sh@124 -- # return 0 00:25:02.991 16:02:05 -- nvmf/common.sh@477 -- # '[' -n 62027 ']' 00:25:02.991 16:02:05 -- nvmf/common.sh@478 -- # killprocess 62027 00:25:02.991 16:02:05 -- common/autotest_common.sh@926 -- # '[' -z 62027 ']' 00:25:02.991 16:02:05 -- common/autotest_common.sh@930 -- # kill -0 62027 00:25:02.991 16:02:05 -- common/autotest_common.sh@931 -- # uname 00:25:02.991 16:02:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:02.991 16:02:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62027 00:25:02.991 killing process with pid 62027 00:25:02.991 16:02:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:02.991 16:02:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:02.991 16:02:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62027' 00:25:02.991 16:02:05 -- common/autotest_common.sh@945 -- # kill 62027 00:25:02.991 16:02:05 -- common/autotest_common.sh@950 -- # wait 62027 00:25:02.991 16:02:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:02.991 16:02:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:02.991 16:02:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:02.991 16:02:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:02.991 16:02:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:02.991 16:02:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.991 16:02:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.991 16:02:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.250 16:02:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:03.250 ************************************ 00:25:03.250 END TEST nvmf_multipath 00:25:03.250 ************************************ 00:25:03.250 00:25:03.250 real 0m18.988s 00:25:03.250 user 1m11.600s 00:25:03.250 sys 0m10.605s 00:25:03.250 16:02:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:03.250 16:02:05 -- common/autotest_common.sh@10 -- # set +x 00:25:03.250 16:02:05 -- nvmf/nvmf.sh@52 -- # 
run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:25:03.250 16:02:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:03.250 16:02:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:03.250 16:02:05 -- common/autotest_common.sh@10 -- # set +x 00:25:03.250 ************************************ 00:25:03.250 START TEST nvmf_zcopy 00:25:03.250 ************************************ 00:25:03.250 16:02:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:25:03.250 * Looking for test storage... 00:25:03.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:03.250 16:02:05 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:03.250 16:02:05 -- nvmf/common.sh@7 -- # uname -s 00:25:03.250 16:02:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.250 16:02:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.250 16:02:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.250 16:02:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.250 16:02:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.250 16:02:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.250 16:02:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.250 16:02:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.250 16:02:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.250 16:02:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.250 16:02:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:25:03.250 16:02:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:25:03.250 16:02:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.250 16:02:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.250 16:02:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:03.250 16:02:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:03.250 16:02:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.250 16:02:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.250 16:02:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.250 16:02:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.250 16:02:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:25:03.250 16:02:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.250 16:02:05 -- paths/export.sh@5 -- # export PATH 00:25:03.250 16:02:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.250 16:02:05 -- nvmf/common.sh@46 -- # : 0 00:25:03.250 16:02:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:03.250 16:02:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:03.250 16:02:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:03.250 16:02:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.250 16:02:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.250 16:02:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:03.250 16:02:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:03.250 16:02:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:03.250 16:02:06 -- target/zcopy.sh@12 -- # nvmftestinit 00:25:03.250 16:02:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:03.250 16:02:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.250 16:02:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:03.250 16:02:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:03.250 16:02:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:03.250 16:02:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.250 16:02:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.250 16:02:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.250 16:02:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:03.250 16:02:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:03.250 16:02:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:03.250 16:02:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:03.250 16:02:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:03.250 16:02:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:03.250 16:02:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.250 16:02:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.251 16:02:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:03.251 16:02:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:03.251 16:02:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:03.251 16:02:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:03.251 16:02:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:03.251 16:02:06 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.251 16:02:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:03.251 16:02:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:03.251 16:02:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:03.251 16:02:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:03.251 16:02:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:03.251 16:02:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:03.251 Cannot find device "nvmf_tgt_br" 00:25:03.251 16:02:06 -- nvmf/common.sh@154 -- # true 00:25:03.251 16:02:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:03.251 Cannot find device "nvmf_tgt_br2" 00:25:03.251 16:02:06 -- nvmf/common.sh@155 -- # true 00:25:03.251 16:02:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:03.251 16:02:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:03.251 Cannot find device "nvmf_tgt_br" 00:25:03.251 16:02:06 -- nvmf/common.sh@157 -- # true 00:25:03.251 16:02:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:03.251 Cannot find device "nvmf_tgt_br2" 00:25:03.251 16:02:06 -- nvmf/common.sh@158 -- # true 00:25:03.251 16:02:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:03.509 16:02:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:03.509 16:02:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:03.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:03.509 16:02:06 -- nvmf/common.sh@161 -- # true 00:25:03.509 16:02:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:03.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:03.509 16:02:06 -- nvmf/common.sh@162 -- # true 00:25:03.509 16:02:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:03.509 16:02:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:03.509 16:02:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:03.509 16:02:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:03.509 16:02:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:03.509 16:02:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:03.509 16:02:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:03.509 16:02:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:03.509 16:02:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:03.509 16:02:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:03.509 16:02:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:03.509 16:02:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:03.509 16:02:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:03.509 16:02:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:03.509 16:02:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:03.509 16:02:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:03.509 16:02:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:25:03.509 16:02:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:03.509 16:02:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:03.509 16:02:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:03.509 16:02:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:03.510 16:02:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:03.510 16:02:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:03.510 16:02:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:03.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:03.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:25:03.510 00:25:03.510 --- 10.0.0.2 ping statistics --- 00:25:03.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.510 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:25:03.510 16:02:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:03.510 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:03.510 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:25:03.510 00:25:03.510 --- 10.0.0.3 ping statistics --- 00:25:03.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.510 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:25:03.510 16:02:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:03.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:03.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:25:03.768 00:25:03.768 --- 10.0.0.1 ping statistics --- 00:25:03.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.768 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:25:03.768 16:02:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.768 16:02:06 -- nvmf/common.sh@421 -- # return 0 00:25:03.768 16:02:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:03.768 16:02:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.768 16:02:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:03.768 16:02:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:03.768 16:02:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.768 16:02:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:03.768 16:02:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:03.768 16:02:06 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:25:03.768 16:02:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:03.768 16:02:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:03.768 16:02:06 -- common/autotest_common.sh@10 -- # set +x 00:25:03.768 16:02:06 -- nvmf/common.sh@469 -- # nvmfpid=62491 00:25:03.768 16:02:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:03.768 16:02:06 -- nvmf/common.sh@470 -- # waitforlisten 62491 00:25:03.768 16:02:06 -- common/autotest_common.sh@819 -- # '[' -z 62491 ']' 00:25:03.768 16:02:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.768 16:02:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:03.768 16:02:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:03.768 16:02:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:03.768 16:02:06 -- common/autotest_common.sh@10 -- # set +x 00:25:03.768 [2024-07-22 16:02:06.462383] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:03.768 [2024-07-22 16:02:06.462706] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.768 [2024-07-22 16:02:06.606330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.073 [2024-07-22 16:02:06.675516] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:04.073 [2024-07-22 16:02:06.675685] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:04.073 [2024-07-22 16:02:06.675703] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:04.073 [2024-07-22 16:02:06.675714] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:04.073 [2024-07-22 16:02:06.675750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.640 16:02:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:04.640 16:02:07 -- common/autotest_common.sh@852 -- # return 0 00:25:04.640 16:02:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:04.640 16:02:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:04.640 16:02:07 -- common/autotest_common.sh@10 -- # set +x 00:25:04.640 16:02:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:04.640 16:02:07 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:25:04.640 16:02:07 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:25:04.640 16:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.640 16:02:07 -- common/autotest_common.sh@10 -- # set +x 00:25:04.640 [2024-07-22 16:02:07.430539] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.640 16:02:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.640 16:02:07 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:25:04.640 16:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.640 16:02:07 -- common/autotest_common.sh@10 -- # set +x 00:25:04.640 16:02:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.640 16:02:07 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:04.640 16:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.640 16:02:07 -- common/autotest_common.sh@10 -- # set +x 00:25:04.640 [2024-07-22 16:02:07.446639] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:04.640 16:02:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.640 16:02:07 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:04.640 16:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.640 16:02:07 -- common/autotest_common.sh@10 -- # set +x 00:25:04.640 16:02:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.640 16:02:07 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 
00:25:04.641 16:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.641 16:02:07 -- common/autotest_common.sh@10 -- # set +x 00:25:04.641 malloc0 00:25:04.641 16:02:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.641 16:02:07 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:04.641 16:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.641 16:02:07 -- common/autotest_common.sh@10 -- # set +x 00:25:04.641 16:02:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.641 16:02:07 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:25:04.641 16:02:07 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:25:04.641 16:02:07 -- nvmf/common.sh@520 -- # config=() 00:25:04.641 16:02:07 -- nvmf/common.sh@520 -- # local subsystem config 00:25:04.641 16:02:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:04.641 16:02:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:04.641 { 00:25:04.641 "params": { 00:25:04.641 "name": "Nvme$subsystem", 00:25:04.641 "trtype": "$TEST_TRANSPORT", 00:25:04.641 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:04.641 "adrfam": "ipv4", 00:25:04.641 "trsvcid": "$NVMF_PORT", 00:25:04.641 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:04.641 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:04.641 "hdgst": ${hdgst:-false}, 00:25:04.641 "ddgst": ${ddgst:-false} 00:25:04.641 }, 00:25:04.641 "method": "bdev_nvme_attach_controller" 00:25:04.641 } 00:25:04.641 EOF 00:25:04.641 )") 00:25:04.641 16:02:07 -- nvmf/common.sh@542 -- # cat 00:25:04.641 16:02:07 -- nvmf/common.sh@544 -- # jq . 00:25:04.641 16:02:07 -- nvmf/common.sh@545 -- # IFS=, 00:25:04.641 16:02:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:04.641 "params": { 00:25:04.641 "name": "Nvme1", 00:25:04.641 "trtype": "tcp", 00:25:04.641 "traddr": "10.0.0.2", 00:25:04.641 "adrfam": "ipv4", 00:25:04.641 "trsvcid": "4420", 00:25:04.641 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.641 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:04.641 "hdgst": false, 00:25:04.641 "ddgst": false 00:25:04.641 }, 00:25:04.641 "method": "bdev_nvme_attach_controller" 00:25:04.641 }' 00:25:04.900 [2024-07-22 16:02:07.530056] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:04.900 [2024-07-22 16:02:07.530185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62523 ] 00:25:04.900 [2024-07-22 16:02:07.669540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.900 [2024-07-22 16:02:07.739804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.158 Running I/O for 10 seconds... 
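malloc0 is then exposed as namespace 1 of cnode1, and the first bdevperf run (10 seconds, queue depth 128, verify workload, 8 KiB I/O) receives its initiator-side configuration from gen_nvmf_target_json via /dev/fd/62; the attach parameters it resolves to (controller Nvme1, NVMe/TCP to 10.0.0.2:4420, cnode1/host1, digests off) are printed above. A rough standalone equivalent, assuming the config is written to a hypothetical bdevperf.json instead of a process-substitution file descriptor; the "subsystems"/"bdev"/"config" wrapper is the standard SPDK JSON-config layout and is not shown verbatim in this log:

# hypothetical on-disk equivalent of the gen_nvmf_target_json output
cat > bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# same workload as the first run in the log: 10 s, QD 128, verify, 8 KiB I/O
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json bdevperf.json -t 10 -q 128 -w verify -o 8192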
00:25:15.128 00:25:15.128 Latency(us) 00:25:15.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.128 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:25:15.128 Verification LBA range: start 0x0 length 0x1000 00:25:15.128 Nvme1n1 : 10.01 8499.61 66.40 0.00 0.00 15019.90 1392.64 25618.62 00:25:15.128 =================================================================================================================== 00:25:15.128 Total : 8499.61 66.40 0.00 0.00 15019.90 1392.64 25618.62 00:25:15.387 16:02:18 -- target/zcopy.sh@39 -- # perfpid=62635 00:25:15.387 16:02:18 -- target/zcopy.sh@41 -- # xtrace_disable 00:25:15.387 16:02:18 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:25:15.387 16:02:18 -- common/autotest_common.sh@10 -- # set +x 00:25:15.387 16:02:18 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:25:15.387 16:02:18 -- nvmf/common.sh@520 -- # config=() 00:25:15.387 16:02:18 -- nvmf/common.sh@520 -- # local subsystem config 00:25:15.387 16:02:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:15.387 16:02:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:15.387 { 00:25:15.387 "params": { 00:25:15.387 "name": "Nvme$subsystem", 00:25:15.387 "trtype": "$TEST_TRANSPORT", 00:25:15.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.387 "adrfam": "ipv4", 00:25:15.387 "trsvcid": "$NVMF_PORT", 00:25:15.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.387 "hdgst": ${hdgst:-false}, 00:25:15.387 "ddgst": ${ddgst:-false} 00:25:15.387 }, 00:25:15.387 "method": "bdev_nvme_attach_controller" 00:25:15.387 } 00:25:15.387 EOF 00:25:15.387 )") 00:25:15.387 16:02:18 -- nvmf/common.sh@542 -- # cat 00:25:15.387 [2024-07-22 16:02:18.094852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.387 [2024-07-22 16:02:18.095029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.387 16:02:18 -- nvmf/common.sh@544 -- # jq . 
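The 10-second verify run finishes at roughly 8.5k IOPS (about 66 MiB/s at 8 KiB I/O, average latency ~15 ms at queue depth 128), and the script immediately launches a second bdevperf (5 seconds, 50/50 random read/write, again fed a generated JSON config, this time through /dev/fd/63). From here the log fills with paired subsystem.c / nvmf_rpc.c errors: the test keeps issuing an add-namespace RPC for NSID 1 while that namespace is still attached to malloc0, so each attempt is rejected with "Requested NSID 1 already in use". The exact loop inside zcopy.sh is not visible in this excerpt; a hedged sketch of the kind of call that produces each error pair:

# expected to fail while NSID 1 is still occupied by malloc0;
# the target logs the two *ERROR* lines seen throughout the rest of this section
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1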
00:25:15.387 16:02:18 -- nvmf/common.sh@545 -- # IFS=, 00:25:15.387 16:02:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:15.387 "params": { 00:25:15.387 "name": "Nvme1", 00:25:15.387 "trtype": "tcp", 00:25:15.387 "traddr": "10.0.0.2", 00:25:15.387 "adrfam": "ipv4", 00:25:15.387 "trsvcid": "4420", 00:25:15.387 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.387 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:15.387 "hdgst": false, 00:25:15.387 "ddgst": false 00:25:15.387 }, 00:25:15.387 "method": "bdev_nvme_attach_controller" 00:25:15.387 }' 00:25:15.387 [2024-07-22 16:02:18.106827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.387 [2024-07-22 16:02:18.106982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.387 [2024-07-22 16:02:18.118834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.387 [2024-07-22 16:02:18.118987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.387 [2024-07-22 16:02:18.130842] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.387 [2024-07-22 16:02:18.131003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.387 [2024-07-22 16:02:18.131437] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:15.387 [2024-07-22 16:02:18.131661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62635 ] 00:25:15.387 [2024-07-22 16:02:18.142832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.387 [2024-07-22 16:02:18.142986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.387 [2024-07-22 16:02:18.150846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.387 [2024-07-22 16:02:18.151007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.387 [2024-07-22 16:02:18.158841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.387 [2024-07-22 16:02:18.158991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.387 [2024-07-22 16:02:18.166841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.387 [2024-07-22 16:02:18.166985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.387 [2024-07-22 16:02:18.174855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.387 [2024-07-22 16:02:18.175003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.387 [2024-07-22 16:02:18.182848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.387 [2024-07-22 16:02:18.182991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.387 [2024-07-22 16:02:18.194846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.387 [2024-07-22 16:02:18.194998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.387 [2024-07-22 16:02:18.206852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.387 [2024-07-22 
16:02:18.207001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.387 [2024-07-22 16:02:18.218862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.387 [2024-07-22 16:02:18.219014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.387 [2024-07-22 16:02:18.226858] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.387 [2024-07-22 16:02:18.226999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.387 [2024-07-22 16:02:18.238862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.387 [2024-07-22 16:02:18.239003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.645 [2024-07-22 16:02:18.250878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.645 [2024-07-22 16:02:18.251040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.645 [2024-07-22 16:02:18.262871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.645 [2024-07-22 16:02:18.263018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.645 [2024-07-22 16:02:18.265093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.645 [2024-07-22 16:02:18.274904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.645 [2024-07-22 16:02:18.275125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.645 [2024-07-22 16:02:18.286897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.645 [2024-07-22 16:02:18.287072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.645 [2024-07-22 16:02:18.294884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.645 [2024-07-22 16:02:18.295033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.645 [2024-07-22 16:02:18.306912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.645 [2024-07-22 16:02:18.307085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.645 [2024-07-22 16:02:18.318920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.645 [2024-07-22 16:02:18.319133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.645 [2024-07-22 16:02:18.326902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.645 [2024-07-22 16:02:18.327049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.645 [2024-07-22 16:02:18.334896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.645 [2024-07-22 16:02:18.335044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.645 [2024-07-22 16:02:18.341134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.645 [2024-07-22 16:02:18.346901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.645 [2024-07-22 16:02:18.347044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.645 [2024-07-22 16:02:18.358941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:25:15.645 [2024-07-22 16:02:18.359163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.645 [2024-07-22 16:02:18.370943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.645 [2024-07-22 16:02:18.371193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.645 [2024-07-22 16:02:18.382954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.645 [2024-07-22 16:02:18.383186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.645 [2024-07-22 16:02:18.394953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.645 [2024-07-22 16:02:18.395199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.645 [2024-07-22 16:02:18.406933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.645 [2024-07-22 16:02:18.407133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.645 [2024-07-22 16:02:18.419028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.645 [2024-07-22 16:02:18.419178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.645 [2024-07-22 16:02:18.431020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.645 [2024-07-22 16:02:18.431165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.645 [2024-07-22 16:02:18.443047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.645 [2024-07-22 16:02:18.443190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.645 [2024-07-22 16:02:18.451047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.646 [2024-07-22 16:02:18.451187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.646 [2024-07-22 16:02:18.463052] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.646 [2024-07-22 16:02:18.463197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.646 [2024-07-22 16:02:18.471054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.646 [2024-07-22 16:02:18.471191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.646 [2024-07-22 16:02:18.479062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.646 [2024-07-22 16:02:18.479209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.646 Running I/O for 5 seconds... 
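The second workload now runs for its 5-second window ("Running I/O for 5 seconds...") while the add-namespace attempts keep failing at a steady cadence; the repeated "already in use" rejections indicate NSID 1 stays attached for the whole run. One way to confirm that from the target side, sketched with scripts/rpc.py and jq (the jq filter is illustrative and not taken from the log):

# list cnode1 and show which namespaces are currently attached
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems \
  | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'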
00:25:15.646 [2024-07-22 16:02:18.491069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.646 [2024-07-22 16:02:18.491214] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.646 [2024-07-22 16:02:18.505209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.646 [2024-07-22 16:02:18.505371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.903 [2024-07-22 16:02:18.515680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.903 [2024-07-22 16:02:18.515835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.903 [2024-07-22 16:02:18.526897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.903 [2024-07-22 16:02:18.527058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.903 [2024-07-22 16:02:18.542682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.903 [2024-07-22 16:02:18.542842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.903 [2024-07-22 16:02:18.559786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.903 [2024-07-22 16:02:18.559828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.903 [2024-07-22 16:02:18.576938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.903 [2024-07-22 16:02:18.576980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.903 [2024-07-22 16:02:18.592289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.903 [2024-07-22 16:02:18.592332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.903 [2024-07-22 16:02:18.611204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.903 [2024-07-22 16:02:18.611249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.903 [2024-07-22 16:02:18.625581] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.903 [2024-07-22 16:02:18.625623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.903 [2024-07-22 16:02:18.641551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.904 [2024-07-22 16:02:18.641593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.904 [2024-07-22 16:02:18.660383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.904 [2024-07-22 16:02:18.660425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.904 [2024-07-22 16:02:18.675535] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.904 [2024-07-22 16:02:18.675576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.904 [2024-07-22 16:02:18.693665] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.904 [2024-07-22 16:02:18.693704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.904 [2024-07-22 16:02:18.708313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.904 
[2024-07-22 16:02:18.708357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.904 [2024-07-22 16:02:18.723722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.904 [2024-07-22 16:02:18.723773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.904 [2024-07-22 16:02:18.733825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.904 [2024-07-22 16:02:18.733864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.904 [2024-07-22 16:02:18.750329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:15.904 [2024-07-22 16:02:18.750374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.162 [2024-07-22 16:02:18.766947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.162 [2024-07-22 16:02:18.766998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.162 [2024-07-22 16:02:18.783530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.162 [2024-07-22 16:02:18.783569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.162 [2024-07-22 16:02:18.800102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.162 [2024-07-22 16:02:18.800144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.162 [2024-07-22 16:02:18.809589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.162 [2024-07-22 16:02:18.809627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.162 [2024-07-22 16:02:18.825108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.162 [2024-07-22 16:02:18.825150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.162 [2024-07-22 16:02:18.835365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.162 [2024-07-22 16:02:18.835403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.162 [2024-07-22 16:02:18.849732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.162 [2024-07-22 16:02:18.849773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.162 [2024-07-22 16:02:18.868921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.162 [2024-07-22 16:02:18.868961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.162 [2024-07-22 16:02:18.883403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.162 [2024-07-22 16:02:18.883446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.162 [2024-07-22 16:02:18.893288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.162 [2024-07-22 16:02:18.893333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.162 [2024-07-22 16:02:18.908895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.162 [2024-07-22 16:02:18.908942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.162 [2024-07-22 16:02:18.925281] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.162 [2024-07-22 16:02:18.925336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.162 [2024-07-22 16:02:18.942395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.162 [2024-07-22 16:02:18.942451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.162 [2024-07-22 16:02:18.958924] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.162 [2024-07-22 16:02:18.958982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.162 [2024-07-22 16:02:18.975873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.162 [2024-07-22 16:02:18.975919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.162 [2024-07-22 16:02:18.992456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.162 [2024-07-22 16:02:18.992528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.162 [2024-07-22 16:02:19.008281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.162 [2024-07-22 16:02:19.008337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.162 [2024-07-22 16:02:19.018087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.162 [2024-07-22 16:02:19.018135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.420 [2024-07-22 16:02:19.033131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.420 [2024-07-22 16:02:19.033189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.420 [2024-07-22 16:02:19.044375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.420 [2024-07-22 16:02:19.044429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.420 [2024-07-22 16:02:19.059571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.420 [2024-07-22 16:02:19.059626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.420 [2024-07-22 16:02:19.075621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.420 [2024-07-22 16:02:19.075673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.420 [2024-07-22 16:02:19.092139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.420 [2024-07-22 16:02:19.092206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.420 [2024-07-22 16:02:19.104792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.420 [2024-07-22 16:02:19.104862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.420 [2024-07-22 16:02:19.124735] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.420 [2024-07-22 16:02:19.124808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.420 [2024-07-22 16:02:19.138907] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.420 [2024-07-22 16:02:19.138981] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.420 [2024-07-22 16:02:19.156651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.420 [2024-07-22 16:02:19.156725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.420 [2024-07-22 16:02:19.170916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.420 [2024-07-22 16:02:19.171001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.420 [2024-07-22 16:02:19.185744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.420 [2024-07-22 16:02:19.185807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.420 [2024-07-22 16:02:19.201008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.420 [2024-07-22 16:02:19.201058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.420 [2024-07-22 16:02:19.218409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.420 [2024-07-22 16:02:19.218460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.420 [2024-07-22 16:02:19.233235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.420 [2024-07-22 16:02:19.233288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.420 [2024-07-22 16:02:19.242734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.420 [2024-07-22 16:02:19.242781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.420 [2024-07-22 16:02:19.258718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.420 [2024-07-22 16:02:19.258764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.420 [2024-07-22 16:02:19.268139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.420 [2024-07-22 16:02:19.268183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.678 [2024-07-22 16:02:19.284057] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.678 [2024-07-22 16:02:19.284107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.678 [2024-07-22 16:02:19.294169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.678 [2024-07-22 16:02:19.294211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.678 [2024-07-22 16:02:19.305238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.678 [2024-07-22 16:02:19.305280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.678 [2024-07-22 16:02:19.318094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.678 [2024-07-22 16:02:19.318136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.678 [2024-07-22 16:02:19.335239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.678 [2024-07-22 16:02:19.335287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.678 [2024-07-22 16:02:19.345315] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.678 [2024-07-22 16:02:19.345357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.678 [2024-07-22 16:02:19.359798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.678 [2024-07-22 16:02:19.359858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.678 [2024-07-22 16:02:19.376692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.678 [2024-07-22 16:02:19.376745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.678 [2024-07-22 16:02:19.393587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.678 [2024-07-22 16:02:19.393637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.679 [2024-07-22 16:02:19.411156] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.679 [2024-07-22 16:02:19.411216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.679 [2024-07-22 16:02:19.425951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.679 [2024-07-22 16:02:19.426016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.679 [2024-07-22 16:02:19.435130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.679 [2024-07-22 16:02:19.435178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.679 [2024-07-22 16:02:19.451477] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.679 [2024-07-22 16:02:19.451538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.679 [2024-07-22 16:02:19.470747] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.679 [2024-07-22 16:02:19.470800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.679 [2024-07-22 16:02:19.485615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.679 [2024-07-22 16:02:19.485660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.679 [2024-07-22 16:02:19.504828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.679 [2024-07-22 16:02:19.504878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.679 [2024-07-22 16:02:19.519265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.679 [2024-07-22 16:02:19.519312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.679 [2024-07-22 16:02:19.536277] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.679 [2024-07-22 16:02:19.536327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.937 [2024-07-22 16:02:19.550713] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.937 [2024-07-22 16:02:19.550762] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.937 [2024-07-22 16:02:19.559657] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.937 [2024-07-22 16:02:19.559700] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.937 [2024-07-22 16:02:19.574900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.937 [2024-07-22 16:02:19.574945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.937 [2024-07-22 16:02:19.594259] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.937 [2024-07-22 16:02:19.594311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.937 [2024-07-22 16:02:19.608600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.937 [2024-07-22 16:02:19.608646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.937 [2024-07-22 16:02:19.624536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.937 [2024-07-22 16:02:19.624602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.937 [2024-07-22 16:02:19.634558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.937 [2024-07-22 16:02:19.634606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.937 [2024-07-22 16:02:19.646094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.937 [2024-07-22 16:02:19.646142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.937 [2024-07-22 16:02:19.656711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.938 [2024-07-22 16:02:19.656756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.938 [2024-07-22 16:02:19.669728] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.938 [2024-07-22 16:02:19.669783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.938 [2024-07-22 16:02:19.679183] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.938 [2024-07-22 16:02:19.679229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.938 [2024-07-22 16:02:19.690666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.938 [2024-07-22 16:02:19.690716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.938 [2024-07-22 16:02:19.702398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.938 [2024-07-22 16:02:19.702446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.938 [2024-07-22 16:02:19.720498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.938 [2024-07-22 16:02:19.720551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.938 [2024-07-22 16:02:19.731117] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.938 [2024-07-22 16:02:19.731162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.938 [2024-07-22 16:02:19.746202] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.938 [2024-07-22 16:02:19.746258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.938 [2024-07-22 16:02:19.761005] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.938 [2024-07-22 16:02:19.761058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.938 [2024-07-22 16:02:19.776890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.938 [2024-07-22 16:02:19.776942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:16.938 [2024-07-22 16:02:19.794517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:16.938 [2024-07-22 16:02:19.794571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.196 [2024-07-22 16:02:19.804777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.196 [2024-07-22 16:02:19.804828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.196 [2024-07-22 16:02:19.819472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.196 [2024-07-22 16:02:19.819549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.196 [2024-07-22 16:02:19.836551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.196 [2024-07-22 16:02:19.836606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.196 [2024-07-22 16:02:19.854245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.196 [2024-07-22 16:02:19.854293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.196 [2024-07-22 16:02:19.869340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.196 [2024-07-22 16:02:19.869383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.196 [2024-07-22 16:02:19.879466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.196 [2024-07-22 16:02:19.879533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.196 [2024-07-22 16:02:19.894651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.196 [2024-07-22 16:02:19.894690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.196 [2024-07-22 16:02:19.904761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.196 [2024-07-22 16:02:19.904801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.196 [2024-07-22 16:02:19.919330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.196 [2024-07-22 16:02:19.919372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.196 [2024-07-22 16:02:19.936454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.196 [2024-07-22 16:02:19.936520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.196 [2024-07-22 16:02:19.953173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.196 [2024-07-22 16:02:19.953216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.196 [2024-07-22 16:02:19.969829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.196 [2024-07-22 16:02:19.969872] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.196 [2024-07-22 16:02:19.985610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.196 [2024-07-22 16:02:19.985651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.196 [2024-07-22 16:02:19.995010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.196 [2024-07-22 16:02:19.995048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.196 [2024-07-22 16:02:20.010719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.196 [2024-07-22 16:02:20.010761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.196 [2024-07-22 16:02:20.026283] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.196 [2024-07-22 16:02:20.026328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.196 [2024-07-22 16:02:20.035870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.196 [2024-07-22 16:02:20.035912] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.196 [2024-07-22 16:02:20.052071] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.196 [2024-07-22 16:02:20.052112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.475 [2024-07-22 16:02:20.071528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.475 [2024-07-22 16:02:20.071569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.475 [2024-07-22 16:02:20.086389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.475 [2024-07-22 16:02:20.086429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.475 [2024-07-22 16:02:20.105298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.475 [2024-07-22 16:02:20.105339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.475 [2024-07-22 16:02:20.120134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.475 [2024-07-22 16:02:20.120178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.475 [2024-07-22 16:02:20.132055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.475 [2024-07-22 16:02:20.132097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.475 [2024-07-22 16:02:20.149792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.475 [2024-07-22 16:02:20.149837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.476 [2024-07-22 16:02:20.164095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.476 [2024-07-22 16:02:20.164149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.476 [2024-07-22 16:02:20.173958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.476 [2024-07-22 16:02:20.174006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.476 [2024-07-22 16:02:20.185740] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.476 [2024-07-22 16:02:20.185787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.476 [2024-07-22 16:02:20.196784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.476 [2024-07-22 16:02:20.196824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.476 [2024-07-22 16:02:20.209600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.476 [2024-07-22 16:02:20.209646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.476 [2024-07-22 16:02:20.225982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.476 [2024-07-22 16:02:20.226031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.476 [2024-07-22 16:02:20.242198] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.476 [2024-07-22 16:02:20.242249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.476 [2024-07-22 16:02:20.259015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.476 [2024-07-22 16:02:20.259063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.476 [2024-07-22 16:02:20.268216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.476 [2024-07-22 16:02:20.268257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.476 [2024-07-22 16:02:20.283905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.476 [2024-07-22 16:02:20.283952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.476 [2024-07-22 16:02:20.293710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.476 [2024-07-22 16:02:20.293751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.476 [2024-07-22 16:02:20.304917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.476 [2024-07-22 16:02:20.304961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.476 [2024-07-22 16:02:20.315690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.476 [2024-07-22 16:02:20.315745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.751 [2024-07-22 16:02:20.326534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.751 [2024-07-22 16:02:20.326580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.751 [2024-07-22 16:02:20.338678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.751 [2024-07-22 16:02:20.338737] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.751 [2024-07-22 16:02:20.352631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.751 [2024-07-22 16:02:20.352693] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.751 [2024-07-22 16:02:20.369724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.751 [2024-07-22 16:02:20.369789] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.751 [2024-07-22 16:02:20.383098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.751 [2024-07-22 16:02:20.383146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.751 [2024-07-22 16:02:20.396064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.751 [2024-07-22 16:02:20.396117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.751 [2024-07-22 16:02:20.408108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.751 [2024-07-22 16:02:20.408149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.751 [2024-07-22 16:02:20.424458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.751 [2024-07-22 16:02:20.424519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.751 [2024-07-22 16:02:20.434334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.751 [2024-07-22 16:02:20.434382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.751 [2024-07-22 16:02:20.450386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.751 [2024-07-22 16:02:20.450435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.751 [2024-07-22 16:02:20.468128] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.751 [2024-07-22 16:02:20.468173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.751 [2024-07-22 16:02:20.477807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.751 [2024-07-22 16:02:20.477846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.751 [2024-07-22 16:02:20.492283] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.751 [2024-07-22 16:02:20.492324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.751 [2024-07-22 16:02:20.509904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.751 [2024-07-22 16:02:20.509945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.751 [2024-07-22 16:02:20.525525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.751 [2024-07-22 16:02:20.525566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.751 [2024-07-22 16:02:20.534849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.751 [2024-07-22 16:02:20.534887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.751 [2024-07-22 16:02:20.551750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.751 [2024-07-22 16:02:20.551791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.751 [2024-07-22 16:02:20.568132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.751 [2024-07-22 16:02:20.568174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.751 [2024-07-22 16:02:20.584078] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.751 [2024-07-22 16:02:20.584120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:17.751 [2024-07-22 16:02:20.601978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:17.751 [2024-07-22 16:02:20.602020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.009 [2024-07-22 16:02:20.616724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.009 [2024-07-22 16:02:20.616763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.009 [2024-07-22 16:02:20.632434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.009 [2024-07-22 16:02:20.632473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.009 [2024-07-22 16:02:20.649630] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.009 [2024-07-22 16:02:20.649669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.009 [2024-07-22 16:02:20.659796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.009 [2024-07-22 16:02:20.659837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.009 [2024-07-22 16:02:20.671727] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.009 [2024-07-22 16:02:20.671766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.009 [2024-07-22 16:02:20.686896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.009 [2024-07-22 16:02:20.686939] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.009 [2024-07-22 16:02:20.703430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.009 [2024-07-22 16:02:20.703525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.009 [2024-07-22 16:02:20.715998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.009 [2024-07-22 16:02:20.716049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.009 [2024-07-22 16:02:20.734567] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.009 [2024-07-22 16:02:20.734619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.009 [2024-07-22 16:02:20.750829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.009 [2024-07-22 16:02:20.750879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.009 [2024-07-22 16:02:20.766323] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.009 [2024-07-22 16:02:20.766373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.009 [2024-07-22 16:02:20.781415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.009 [2024-07-22 16:02:20.781466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.009 [2024-07-22 16:02:20.798743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.009 [2024-07-22 16:02:20.798783] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.009 [2024-07-22 16:02:20.813521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.009 [2024-07-22 16:02:20.813558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.009 [2024-07-22 16:02:20.822968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.009 [2024-07-22 16:02:20.823015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.009 [2024-07-22 16:02:20.839436] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.009 [2024-07-22 16:02:20.839474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.009 [2024-07-22 16:02:20.856600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.009 [2024-07-22 16:02:20.856638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.268 [2024-07-22 16:02:20.872798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.268 [2024-07-22 16:02:20.872837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.268 [2024-07-22 16:02:20.891339] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.268 [2024-07-22 16:02:20.891380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.268 [2024-07-22 16:02:20.905962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.268 [2024-07-22 16:02:20.906002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.268 [2024-07-22 16:02:20.923482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.268 [2024-07-22 16:02:20.923532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.268 [2024-07-22 16:02:20.938057] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.268 [2024-07-22 16:02:20.938097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.268 [2024-07-22 16:02:20.953288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.268 [2024-07-22 16:02:20.953327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.268 [2024-07-22 16:02:20.972255] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.268 [2024-07-22 16:02:20.972298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.268 [2024-07-22 16:02:20.982338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.268 [2024-07-22 16:02:20.982377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.268 [2024-07-22 16:02:20.993153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.268 [2024-07-22 16:02:20.993191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.268 [2024-07-22 16:02:21.010360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.268 [2024-07-22 16:02:21.010402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.268 [2024-07-22 16:02:21.026762] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.268 [2024-07-22 16:02:21.026804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.268 [2024-07-22 16:02:21.044522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.268 [2024-07-22 16:02:21.044562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.268 [2024-07-22 16:02:21.060533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.268 [2024-07-22 16:02:21.060575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.268 [2024-07-22 16:02:21.076439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.268 [2024-07-22 16:02:21.076480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.268 [2024-07-22 16:02:21.095714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.268 [2024-07-22 16:02:21.095755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.268 [2024-07-22 16:02:21.110227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.268 [2024-07-22 16:02:21.110267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.268 [2024-07-22 16:02:21.120044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.268 [2024-07-22 16:02:21.120082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.527 [2024-07-22 16:02:21.133414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.527 [2024-07-22 16:02:21.133455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.527 [2024-07-22 16:02:21.148224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.527 [2024-07-22 16:02:21.148270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.527 [2024-07-22 16:02:21.163754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.527 [2024-07-22 16:02:21.163800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.527 [2024-07-22 16:02:21.181645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.527 [2024-07-22 16:02:21.181686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.527 [2024-07-22 16:02:21.196303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.527 [2024-07-22 16:02:21.196348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.527 [2024-07-22 16:02:21.212107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.527 [2024-07-22 16:02:21.212153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.527 [2024-07-22 16:02:21.229408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.527 [2024-07-22 16:02:21.229450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.527 [2024-07-22 16:02:21.244225] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.527 [2024-07-22 16:02:21.244265] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.527 [2024-07-22 16:02:21.253525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.527 [2024-07-22 16:02:21.253572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.527 [2024-07-22 16:02:21.269708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.527 [2024-07-22 16:02:21.269761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.527 [2024-07-22 16:02:21.285795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.527 [2024-07-22 16:02:21.285838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.527 [2024-07-22 16:02:21.294828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.527 [2024-07-22 16:02:21.294869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.527 [2024-07-22 16:02:21.311192] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.527 [2024-07-22 16:02:21.311236] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.527 [2024-07-22 16:02:21.330088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.527 [2024-07-22 16:02:21.330130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.527 [2024-07-22 16:02:21.344756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.527 [2024-07-22 16:02:21.344798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.527 [2024-07-22 16:02:21.354073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.527 [2024-07-22 16:02:21.354109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.527 [2024-07-22 16:02:21.367221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.527 [2024-07-22 16:02:21.367261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.527 [2024-07-22 16:02:21.383104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.527 [2024-07-22 16:02:21.383143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.786 [2024-07-22 16:02:21.400397] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.786 [2024-07-22 16:02:21.400442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.786 [2024-07-22 16:02:21.416404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.786 [2024-07-22 16:02:21.416445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.786 [2024-07-22 16:02:21.435376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.786 [2024-07-22 16:02:21.435418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.786 [2024-07-22 16:02:21.450726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.786 [2024-07-22 16:02:21.450764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.786 [2024-07-22 16:02:21.469613] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.786 [2024-07-22 16:02:21.469651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.786 [2024-07-22 16:02:21.484319] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.786 [2024-07-22 16:02:21.484358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.786 [2024-07-22 16:02:21.493534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.786 [2024-07-22 16:02:21.493573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.786 [2024-07-22 16:02:21.509687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.786 [2024-07-22 16:02:21.509729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.786 [2024-07-22 16:02:21.527164] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.786 [2024-07-22 16:02:21.527211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.786 [2024-07-22 16:02:21.544302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.786 [2024-07-22 16:02:21.544354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.786 [2024-07-22 16:02:21.559098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.786 [2024-07-22 16:02:21.559138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.786 [2024-07-22 16:02:21.568298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.786 [2024-07-22 16:02:21.568335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.786 [2024-07-22 16:02:21.583089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.786 [2024-07-22 16:02:21.583131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.786 [2024-07-22 16:02:21.600801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.786 [2024-07-22 16:02:21.600851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.786 [2024-07-22 16:02:21.615408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.786 [2024-07-22 16:02:21.615459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.786 [2024-07-22 16:02:21.631126] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.786 [2024-07-22 16:02:21.631173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:18.786 [2024-07-22 16:02:21.641662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:18.786 [2024-07-22 16:02:21.641705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.045 [2024-07-22 16:02:21.656032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.045 [2024-07-22 16:02:21.656079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.045 [2024-07-22 16:02:21.665619] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.045 [2024-07-22 16:02:21.665661] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.045 [2024-07-22 16:02:21.678840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.045 [2024-07-22 16:02:21.678883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.045 [2024-07-22 16:02:21.694992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.045 [2024-07-22 16:02:21.695034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.045 [2024-07-22 16:02:21.712023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.045 [2024-07-22 16:02:21.712067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.045 [2024-07-22 16:02:21.730698] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.045 [2024-07-22 16:02:21.730754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.045 [2024-07-22 16:02:21.741892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.045 [2024-07-22 16:02:21.741943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.045 [2024-07-22 16:02:21.753968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.045 [2024-07-22 16:02:21.754015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.045 [2024-07-22 16:02:21.765975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.045 [2024-07-22 16:02:21.766038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.045 [2024-07-22 16:02:21.778151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.045 [2024-07-22 16:02:21.778205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.045 [2024-07-22 16:02:21.790449] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.045 [2024-07-22 16:02:21.790521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.045 [2024-07-22 16:02:21.803005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.045 [2024-07-22 16:02:21.803059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.045 [2024-07-22 16:02:21.815088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.045 [2024-07-22 16:02:21.815139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.045 [2024-07-22 16:02:21.827117] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.045 [2024-07-22 16:02:21.827166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.045 [2024-07-22 16:02:21.839547] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.045 [2024-07-22 16:02:21.839594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.045 [2024-07-22 16:02:21.851403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.045 [2024-07-22 16:02:21.851458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.045 [2024-07-22 16:02:21.863528] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.045 [2024-07-22 16:02:21.863584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.045 [2024-07-22 16:02:21.875596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.045 [2024-07-22 16:02:21.875647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.045 [2024-07-22 16:02:21.891861] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.045 [2024-07-22 16:02:21.891922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.045 [2024-07-22 16:02:21.903428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.045 [2024-07-22 16:02:21.903506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:21.916151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:21.916216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:21.927935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:21.927989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:21.940217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:21.940272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:21.952467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:21.952541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:21.964589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:21.964639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:21.976728] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:21.976776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:21.988609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:21.988653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:22.000639] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:22.000684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:22.012676] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:22.012723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:22.024598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:22.024643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:22.036797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:22.036843] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:22.048731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:22.048782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:22.060989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:22.061038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:22.071220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:22.071271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:22.082794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:22.082842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:22.094875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:22.094925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:22.106903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:22.106951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:22.118871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:22.118922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:22.133895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:22.133946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:22.143554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:22.143597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.304 [2024-07-22 16:02:22.154738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.304 [2024-07-22 16:02:22.154781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.563 [2024-07-22 16:02:22.170671] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.563 [2024-07-22 16:02:22.170726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.563 [2024-07-22 16:02:22.189092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.563 [2024-07-22 16:02:22.189160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.563 [2024-07-22 16:02:22.204561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.563 [2024-07-22 16:02:22.204611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.563 [2024-07-22 16:02:22.222458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.563 [2024-07-22 16:02:22.222520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.563 [2024-07-22 16:02:22.236749] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.563 [2024-07-22 16:02:22.236801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.563 [2024-07-22 16:02:22.245732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.563 [2024-07-22 16:02:22.245775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.563 [2024-07-22 16:02:22.257290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.563 [2024-07-22 16:02:22.257337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.563 [2024-07-22 16:02:22.273107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.563 [2024-07-22 16:02:22.273164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.563 [2024-07-22 16:02:22.290110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.563 [2024-07-22 16:02:22.290163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.563 [2024-07-22 16:02:22.305759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.563 [2024-07-22 16:02:22.305805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.563 [2024-07-22 16:02:22.322827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.563 [2024-07-22 16:02:22.322877] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.563 [2024-07-22 16:02:22.338707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.563 [2024-07-22 16:02:22.338754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.563 [2024-07-22 16:02:22.357852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.563 [2024-07-22 16:02:22.357900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.563 [2024-07-22 16:02:22.372611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.563 [2024-07-22 16:02:22.372655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.563 [2024-07-22 16:02:22.382121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.563 [2024-07-22 16:02:22.382163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.563 [2024-07-22 16:02:22.397545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.563 [2024-07-22 16:02:22.397599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.563 [2024-07-22 16:02:22.412965] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.563 [2024-07-22 16:02:22.413019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.821 [2024-07-22 16:02:22.430290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.821 [2024-07-22 16:02:22.430347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.821 [2024-07-22 16:02:22.440580] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.821 [2024-07-22 16:02:22.440628] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.821 [2024-07-22 16:02:22.451123] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.821 [2024-07-22 16:02:22.451167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.821 [2024-07-22 16:02:22.462350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.821 [2024-07-22 16:02:22.462399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.821 [2024-07-22 16:02:22.477512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.821 [2024-07-22 16:02:22.477566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.821 [2024-07-22 16:02:22.494132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.821 [2024-07-22 16:02:22.494187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.821 [2024-07-22 16:02:22.504131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.821 [2024-07-22 16:02:22.504182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.821 [2024-07-22 16:02:22.515808] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.821 [2024-07-22 16:02:22.515856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.821 [2024-07-22 16:02:22.526989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.821 [2024-07-22 16:02:22.527040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.821 [2024-07-22 16:02:22.544760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.821 [2024-07-22 16:02:22.544816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.821 [2024-07-22 16:02:22.561406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.821 [2024-07-22 16:02:22.561461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.821 [2024-07-22 16:02:22.577710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.821 [2024-07-22 16:02:22.577762] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.821 [2024-07-22 16:02:22.587544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.821 [2024-07-22 16:02:22.587589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.821 [2024-07-22 16:02:22.598887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.821 [2024-07-22 16:02:22.598933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.821 [2024-07-22 16:02:22.610269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.821 [2024-07-22 16:02:22.610320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.821 [2024-07-22 16:02:22.624950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.821 [2024-07-22 16:02:22.625002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.821 [2024-07-22 16:02:22.642382] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.821 [2024-07-22 16:02:22.642439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.821 [2024-07-22 16:02:22.651868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.821 [2024-07-22 16:02:22.651916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.821 [2024-07-22 16:02:22.664845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.821 [2024-07-22 16:02:22.664900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:19.821 [2024-07-22 16:02:22.676856] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:19.821 [2024-07-22 16:02:22.676915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.080 [2024-07-22 16:02:22.693167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.080 [2024-07-22 16:02:22.693230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.080 [2024-07-22 16:02:22.708438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.080 [2024-07-22 16:02:22.708503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.080 [2024-07-22 16:02:22.718687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.080 [2024-07-22 16:02:22.718736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.080 [2024-07-22 16:02:22.731004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.080 [2024-07-22 16:02:22.731059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.080 [2024-07-22 16:02:22.742800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.080 [2024-07-22 16:02:22.742853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.080 [2024-07-22 16:02:22.754989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.080 [2024-07-22 16:02:22.755044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.080 [2024-07-22 16:02:22.767437] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.080 [2024-07-22 16:02:22.767508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.080 [2024-07-22 16:02:22.779650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.080 [2024-07-22 16:02:22.779703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.080 [2024-07-22 16:02:22.790087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.080 [2024-07-22 16:02:22.790138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.080 [2024-07-22 16:02:22.801953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.080 [2024-07-22 16:02:22.802005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.080 [2024-07-22 16:02:22.817644] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.080 [2024-07-22 16:02:22.817701] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.080 [2024-07-22 16:02:22.833925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.080 [2024-07-22 16:02:22.833981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.080 [2024-07-22 16:02:22.844741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.080 [2024-07-22 16:02:22.844792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.080 [2024-07-22 16:02:22.857877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.080 [2024-07-22 16:02:22.857934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.080 [2024-07-22 16:02:22.872959] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.080 [2024-07-22 16:02:22.873021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.080 [2024-07-22 16:02:22.888525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.080 [2024-07-22 16:02:22.888587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.080 [2024-07-22 16:02:22.898683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.080 [2024-07-22 16:02:22.898734] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.080 [2024-07-22 16:02:22.911892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.080 [2024-07-22 16:02:22.911947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.080 [2024-07-22 16:02:22.923684] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.080 [2024-07-22 16:02:22.923738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.080 [2024-07-22 16:02:22.935911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.080 [2024-07-22 16:02:22.935968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.337 [2024-07-22 16:02:22.950784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.337 [2024-07-22 16:02:22.950845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.337 [2024-07-22 16:02:22.962231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.337 [2024-07-22 16:02:22.962285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.337 [2024-07-22 16:02:22.975162] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.337 [2024-07-22 16:02:22.975217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.337 [2024-07-22 16:02:22.987332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.337 [2024-07-22 16:02:22.987383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.337 [2024-07-22 16:02:23.003386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.337 [2024-07-22 16:02:23.003450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.337 [2024-07-22 16:02:23.019414] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.337 [2024-07-22 16:02:23.019470] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.337 [2024-07-22 16:02:23.029333] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.337 [2024-07-22 16:02:23.029390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.337 [2024-07-22 16:02:23.042180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.337 [2024-07-22 16:02:23.042238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.337 [2024-07-22 16:02:23.053826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.337 [2024-07-22 16:02:23.053885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.337 [2024-07-22 16:02:23.065541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.337 [2024-07-22 16:02:23.065597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.337 [2024-07-22 16:02:23.078459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.337 [2024-07-22 16:02:23.078522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.337 [2024-07-22 16:02:23.088380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.337 [2024-07-22 16:02:23.088425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.337 [2024-07-22 16:02:23.100057] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.337 [2024-07-22 16:02:23.100107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.337 [2024-07-22 16:02:23.110427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.337 [2024-07-22 16:02:23.110473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.337 [2024-07-22 16:02:23.121141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.337 [2024-07-22 16:02:23.121190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.337 [2024-07-22 16:02:23.136986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.337 [2024-07-22 16:02:23.137047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.337 [2024-07-22 16:02:23.153181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.337 [2024-07-22 16:02:23.153251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.337 [2024-07-22 16:02:23.169410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.337 [2024-07-22 16:02:23.169473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.337 [2024-07-22 16:02:23.187388] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.337 [2024-07-22 16:02:23.187451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.594 [2024-07-22 16:02:23.202259] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.594 [2024-07-22 16:02:23.202318] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.594 [2024-07-22 16:02:23.211445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.594 [2024-07-22 16:02:23.211506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.594 [2024-07-22 16:02:23.228056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.594 [2024-07-22 16:02:23.228118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.594 [2024-07-22 16:02:23.238087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.594 [2024-07-22 16:02:23.238138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.594 [2024-07-22 16:02:23.249760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.594 [2024-07-22 16:02:23.249811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.594 [2024-07-22 16:02:23.260865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.594 [2024-07-22 16:02:23.260922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.594 [2024-07-22 16:02:23.276604] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.594 [2024-07-22 16:02:23.276661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.594 [2024-07-22 16:02:23.292098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.594 [2024-07-22 16:02:23.292158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.594 [2024-07-22 16:02:23.310259] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.594 [2024-07-22 16:02:23.310315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.594 [2024-07-22 16:02:23.320835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.594 [2024-07-22 16:02:23.320903] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.594 [2024-07-22 16:02:23.331533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.594 [2024-07-22 16:02:23.331581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.594 [2024-07-22 16:02:23.346010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.594 [2024-07-22 16:02:23.346065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.594 [2024-07-22 16:02:23.355399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.594 [2024-07-22 16:02:23.355456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.594 [2024-07-22 16:02:23.371147] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.594 [2024-07-22 16:02:23.371207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.595 [2024-07-22 16:02:23.380944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.595 [2024-07-22 16:02:23.380989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.595 [2024-07-22 16:02:23.395465] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.595 [2024-07-22 16:02:23.395535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.595 [2024-07-22 16:02:23.406174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.595 [2024-07-22 16:02:23.406222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.595 [2024-07-22 16:02:23.417790] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.595 [2024-07-22 16:02:23.417838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.595 [2024-07-22 16:02:23.428668] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.595 [2024-07-22 16:02:23.428717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.595 [2024-07-22 16:02:23.442893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.595 [2024-07-22 16:02:23.442950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.595 [2024-07-22 16:02:23.452739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.595 [2024-07-22 16:02:23.452785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.853 [2024-07-22 16:02:23.463990] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.853 [2024-07-22 16:02:23.464039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.853 [2024-07-22 16:02:23.482020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.853 [2024-07-22 16:02:23.482074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.853 [2024-07-22 16:02:23.495532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.853 [2024-07-22 16:02:23.495582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.853 00:25:20.853 Latency(us) 00:25:20.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.853 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:25:20.853 Nvme1n1 : 5.01 11331.66 88.53 0.00 0.00 11279.77 4527.94 24546.21 00:25:20.853 =================================================================================================================== 00:25:20.853 Total : 11331.66 88.53 0.00 0.00 11279.77 4527.94 24546.21 00:25:20.853 [2024-07-22 16:02:23.501062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.853 [2024-07-22 16:02:23.501097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.853 [2024-07-22 16:02:23.509062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.853 [2024-07-22 16:02:23.509105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.853 [2024-07-22 16:02:23.521097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.853 [2024-07-22 16:02:23.521155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.853 [2024-07-22 16:02:23.529086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.853 [2024-07-22 16:02:23.529137] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.853 [2024-07-22 16:02:23.537080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.853 [2024-07-22 16:02:23.537131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.853 [2024-07-22 16:02:23.545089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.853 [2024-07-22 16:02:23.545152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.853 [2024-07-22 16:02:23.553085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.853 [2024-07-22 16:02:23.553138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.853 [2024-07-22 16:02:23.561098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.853 [2024-07-22 16:02:23.561151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.853 [2024-07-22 16:02:23.573094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.853 [2024-07-22 16:02:23.573150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.853 [2024-07-22 16:02:23.581091] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.853 [2024-07-22 16:02:23.581135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.853 [2024-07-22 16:02:23.589081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.853 [2024-07-22 16:02:23.589123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.853 [2024-07-22 16:02:23.597089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.853 [2024-07-22 16:02:23.597131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.853 [2024-07-22 16:02:23.605085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.853 [2024-07-22 16:02:23.605125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.853 [2024-07-22 16:02:23.613100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.853 [2024-07-22 16:02:23.613149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.853 [2024-07-22 16:02:23.621104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.853 [2024-07-22 16:02:23.621152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.853 [2024-07-22 16:02:23.629093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.853 [2024-07-22 16:02:23.629133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.853 [2024-07-22 16:02:23.641100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.853 [2024-07-22 16:02:23.641145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.854 [2024-07-22 16:02:23.649097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.854 [2024-07-22 16:02:23.649136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.854 [2024-07-22 16:02:23.657120] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.854 [2024-07-22 16:02:23.657167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.854 [2024-07-22 16:02:23.665120] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.854 [2024-07-22 16:02:23.665167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.854 [2024-07-22 16:02:23.673105] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.854 [2024-07-22 16:02:23.673142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.854 [2024-07-22 16:02:23.681109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:20.854 [2024-07-22 16:02:23.681147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:20.854 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (62635) - No such process 00:25:20.854 16:02:23 -- target/zcopy.sh@49 -- # wait 62635 00:25:20.854 16:02:23 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:20.854 16:02:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:20.854 16:02:23 -- common/autotest_common.sh@10 -- # set +x 00:25:20.854 16:02:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:20.854 16:02:23 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:25:20.854 16:02:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:20.854 16:02:23 -- common/autotest_common.sh@10 -- # set +x 00:25:20.854 delay0 00:25:20.854 16:02:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:20.854 16:02:23 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:25:20.854 16:02:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:20.854 16:02:23 -- common/autotest_common.sh@10 -- # set +x 00:25:21.112 16:02:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.112 16:02:23 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:25:21.112 [2024-07-22 16:02:23.873643] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:25:27.672 Initializing NVMe Controllers 00:25:27.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:27.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:27.672 Initialization complete. Launching workers. 
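For reference, the xtrace entries above (target/zcopy.sh@52-56) boil down to the short standalone sequence below. This is a sketch, not the test itself: it assumes the target from this run is still listening at 10.0.0.2:4420 with subsystem nqn.2016-06.io.spdk:cnode1 and a malloc0 bdev already configured, and that scripts/rpc.py (which the autotest rpc_cmd helper dispatches to) can reach the target's default RPC socket.

cd /home/vagrant/spdk_repo/spdk
# Replace namespace 1 with a delay bdev layered on malloc0 (latency arguments are in microseconds).
./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# Drive the slowed namespace with the abort example over NVMe/TCP for 5 seconds.
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The Latency(us) summary above and the abort counters that follow are the output of that run.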
00:25:27.672 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 262 00:25:27.672 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 549, failed to submit 33 00:25:27.672 success 453, unsuccess 96, failed 0 00:25:27.672 16:02:29 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:25:27.672 16:02:29 -- target/zcopy.sh@60 -- # nvmftestfini 00:25:27.672 16:02:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:27.672 16:02:29 -- nvmf/common.sh@116 -- # sync 00:25:27.672 16:02:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:27.672 16:02:29 -- nvmf/common.sh@119 -- # set +e 00:25:27.672 16:02:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:27.672 16:02:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:27.672 rmmod nvme_tcp 00:25:27.672 rmmod nvme_fabrics 00:25:27.672 rmmod nvme_keyring 00:25:27.672 16:02:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:27.672 16:02:30 -- nvmf/common.sh@123 -- # set -e 00:25:27.672 16:02:30 -- nvmf/common.sh@124 -- # return 0 00:25:27.672 16:02:30 -- nvmf/common.sh@477 -- # '[' -n 62491 ']' 00:25:27.672 16:02:30 -- nvmf/common.sh@478 -- # killprocess 62491 00:25:27.672 16:02:30 -- common/autotest_common.sh@926 -- # '[' -z 62491 ']' 00:25:27.672 16:02:30 -- common/autotest_common.sh@930 -- # kill -0 62491 00:25:27.672 16:02:30 -- common/autotest_common.sh@931 -- # uname 00:25:27.672 16:02:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:27.672 16:02:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62491 00:25:27.672 killing process with pid 62491 00:25:27.672 16:02:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:27.672 16:02:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:27.672 16:02:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62491' 00:25:27.672 16:02:30 -- common/autotest_common.sh@945 -- # kill 62491 00:25:27.672 16:02:30 -- common/autotest_common.sh@950 -- # wait 62491 00:25:27.672 16:02:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:27.672 16:02:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:27.672 16:02:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:27.672 16:02:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:27.672 16:02:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:27.672 16:02:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.672 16:02:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.672 16:02:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.672 16:02:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:27.672 00:25:27.672 real 0m24.389s 00:25:27.672 user 0m39.946s 00:25:27.672 sys 0m6.571s 00:25:27.672 16:02:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:27.672 16:02:30 -- common/autotest_common.sh@10 -- # set +x 00:25:27.672 ************************************ 00:25:27.672 END TEST nvmf_zcopy 00:25:27.672 ************************************ 00:25:27.672 16:02:30 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:25:27.672 16:02:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:27.672 16:02:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:27.672 16:02:30 -- common/autotest_common.sh@10 -- # set +x 00:25:27.672 ************************************ 00:25:27.672 START TEST nvmf_nmic 
00:25:27.672 ************************************ 00:25:27.672 16:02:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:25:27.672 * Looking for test storage... 00:25:27.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:27.672 16:02:30 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:27.672 16:02:30 -- nvmf/common.sh@7 -- # uname -s 00:25:27.672 16:02:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:27.672 16:02:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:27.672 16:02:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:27.672 16:02:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:27.672 16:02:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:27.672 16:02:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:27.672 16:02:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:27.672 16:02:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:27.672 16:02:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:27.672 16:02:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:27.672 16:02:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:25:27.672 16:02:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:25:27.672 16:02:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:27.672 16:02:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.672 16:02:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:27.672 16:02:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:27.672 16:02:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.672 16:02:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.672 16:02:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.672 16:02:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.672 16:02:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.672 16:02:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.672 16:02:30 -- paths/export.sh@5 -- # export PATH 00:25:27.673 16:02:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.673 16:02:30 -- nvmf/common.sh@46 -- # : 0 00:25:27.673 16:02:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:27.673 16:02:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:27.673 16:02:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:27.673 16:02:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.673 16:02:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:27.673 16:02:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:27.673 16:02:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:27.673 16:02:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:27.673 16:02:30 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:27.673 16:02:30 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:27.673 16:02:30 -- target/nmic.sh@14 -- # nvmftestinit 00:25:27.673 16:02:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:27.673 16:02:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.673 16:02:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:27.673 16:02:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:27.673 16:02:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:27.673 16:02:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.673 16:02:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.673 16:02:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.673 16:02:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:27.673 16:02:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:27.673 16:02:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:27.673 16:02:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:27.673 16:02:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:27.673 16:02:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:27.673 16:02:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:27.673 16:02:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:27.673 16:02:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:27.673 16:02:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:27.673 16:02:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:27.673 16:02:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:27.673 16:02:30 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:27.673 16:02:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.673 16:02:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:27.673 16:02:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:27.673 16:02:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:27.673 16:02:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:27.673 16:02:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:27.673 16:02:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:27.673 Cannot find device "nvmf_tgt_br" 00:25:27.673 16:02:30 -- nvmf/common.sh@154 -- # true 00:25:27.673 16:02:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:27.673 Cannot find device "nvmf_tgt_br2" 00:25:27.673 16:02:30 -- nvmf/common.sh@155 -- # true 00:25:27.673 16:02:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:27.673 16:02:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:27.673 Cannot find device "nvmf_tgt_br" 00:25:27.673 16:02:30 -- nvmf/common.sh@157 -- # true 00:25:27.673 16:02:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:27.673 Cannot find device "nvmf_tgt_br2" 00:25:27.673 16:02:30 -- nvmf/common.sh@158 -- # true 00:25:27.673 16:02:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:27.931 16:02:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:27.931 16:02:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:27.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:27.931 16:02:30 -- nvmf/common.sh@161 -- # true 00:25:27.931 16:02:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:27.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:27.931 16:02:30 -- nvmf/common.sh@162 -- # true 00:25:27.931 16:02:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:27.931 16:02:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:27.931 16:02:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:27.931 16:02:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:27.931 16:02:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:27.931 16:02:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:27.931 16:02:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:27.931 16:02:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:27.931 16:02:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:27.931 16:02:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:27.932 16:02:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:27.932 16:02:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:27.932 16:02:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:27.932 16:02:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:27.932 16:02:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:27.932 16:02:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:25:27.932 16:02:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:27.932 16:02:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:27.932 16:02:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:27.932 16:02:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:27.932 16:02:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:27.932 16:02:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:27.932 16:02:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:27.932 16:02:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:27.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:27.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:25:27.932 00:25:27.932 --- 10.0.0.2 ping statistics --- 00:25:27.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.932 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:25:27.932 16:02:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:27.932 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:27.932 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:25:27.932 00:25:27.932 --- 10.0.0.3 ping statistics --- 00:25:27.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.932 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:25:28.190 16:02:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:28.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:28.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:25:28.190 00:25:28.190 --- 10.0.0.1 ping statistics --- 00:25:28.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.190 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:25:28.190 16:02:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.190 16:02:30 -- nvmf/common.sh@421 -- # return 0 00:25:28.190 16:02:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:28.190 16:02:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.190 16:02:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:28.190 16:02:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:28.190 16:02:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.190 16:02:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:28.190 16:02:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:28.190 16:02:30 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:25:28.190 16:02:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:28.190 16:02:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:28.190 16:02:30 -- common/autotest_common.sh@10 -- # set +x 00:25:28.190 16:02:30 -- nvmf/common.sh@469 -- # nvmfpid=62964 00:25:28.190 16:02:30 -- nvmf/common.sh@470 -- # waitforlisten 62964 00:25:28.190 16:02:30 -- common/autotest_common.sh@819 -- # '[' -z 62964 ']' 00:25:28.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.190 16:02:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.190 16:02:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:28.190 16:02:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
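The three successful pings close out nvmf_veth_init: the initiator side (10.0.0.1 on nvmf_init_if) stays in the default network namespace, both target addresses (10.0.0.2 and 10.0.0.3) sit on veth ends moved into nvmf_tgt_ns_spdk, and a bridge joins the host-side peers. A condensed sketch of the same topology built by hand, using only the ip/iptables commands that appear in the log (interface, namespace and address names as logged; assumes root privileges):

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if end carries traffic, the *_br end is enslaved to the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # target-side interfaces move into the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # addressing: initiator 10.0.0.1, target 10.0.0.2 / 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up, including loopback inside the namespace
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge joining the three host-side peer interfaces
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # let NVMe/TCP (port 4420) reach the initiator interface and cross the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2    # initiator -> first target address

The target application itself is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt ...), which is why the last ping in the log is issued from within nvmf_tgt_ns_spdk back to 10.0.0.1.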
00:25:28.190 16:02:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:28.191 16:02:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:28.191 16:02:30 -- common/autotest_common.sh@10 -- # set +x 00:25:28.191 [2024-07-22 16:02:30.881132] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:28.191 [2024-07-22 16:02:30.881233] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.191 [2024-07-22 16:02:31.023382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:28.448 [2024-07-22 16:02:31.094077] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:28.449 [2024-07-22 16:02:31.094441] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.449 [2024-07-22 16:02:31.094535] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.449 [2024-07-22 16:02:31.094853] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:28.449 [2024-07-22 16:02:31.095070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.449 [2024-07-22 16:02:31.095162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:28.449 [2024-07-22 16:02:31.095276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:28.449 [2024-07-22 16:02:31.095283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.384 16:02:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:29.384 16:02:31 -- common/autotest_common.sh@852 -- # return 0 00:25:29.384 16:02:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:29.384 16:02:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:29.384 16:02:31 -- common/autotest_common.sh@10 -- # set +x 00:25:29.384 16:02:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.384 16:02:31 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:29.384 16:02:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.384 16:02:31 -- common/autotest_common.sh@10 -- # set +x 00:25:29.384 [2024-07-22 16:02:31.939341] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.384 16:02:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.384 16:02:31 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:29.384 16:02:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.384 16:02:31 -- common/autotest_common.sh@10 -- # set +x 00:25:29.384 Malloc0 00:25:29.384 16:02:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.384 16:02:31 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:29.384 16:02:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.384 16:02:31 -- common/autotest_common.sh@10 -- # set +x 00:25:29.384 16:02:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.384 16:02:31 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:29.384 16:02:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.384 16:02:31 -- 
common/autotest_common.sh@10 -- # set +x 00:25:29.384 16:02:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.384 16:02:32 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:29.384 16:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.384 16:02:32 -- common/autotest_common.sh@10 -- # set +x 00:25:29.384 [2024-07-22 16:02:32.011064] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.384 test case1: single bdev can't be used in multiple subsystems 00:25:29.384 16:02:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.384 16:02:32 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:25:29.384 16:02:32 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:29.384 16:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.384 16:02:32 -- common/autotest_common.sh@10 -- # set +x 00:25:29.384 16:02:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.384 16:02:32 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:29.384 16:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.384 16:02:32 -- common/autotest_common.sh@10 -- # set +x 00:25:29.384 16:02:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.384 16:02:32 -- target/nmic.sh@28 -- # nmic_status=0 00:25:29.384 16:02:32 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:25:29.384 16:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.384 16:02:32 -- common/autotest_common.sh@10 -- # set +x 00:25:29.384 [2024-07-22 16:02:32.038881] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:25:29.384 [2024-07-22 16:02:32.038932] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:25:29.384 [2024-07-22 16:02:32.038947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:29.384 request: 00:25:29.384 { 00:25:29.384 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:25:29.384 "namespace": { 00:25:29.384 "bdev_name": "Malloc0" 00:25:29.384 }, 00:25:29.384 "method": "nvmf_subsystem_add_ns", 00:25:29.384 "req_id": 1 00:25:29.384 } 00:25:29.384 Got JSON-RPC error response 00:25:29.384 response: 00:25:29.384 { 00:25:29.384 "code": -32602, 00:25:29.384 "message": "Invalid parameters" 00:25:29.384 } 00:25:29.384 16:02:32 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:29.384 16:02:32 -- target/nmic.sh@29 -- # nmic_status=1 00:25:29.384 16:02:32 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:25:29.384 16:02:32 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:25:29.384 Adding namespace failed - expected result. 
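The -32602 "Invalid parameters" response above is exactly what test case1 asserts: Malloc0 is already claimed (exclusive_write) by cnode1, so a second subsystem cannot add the same bdev as a namespace. A minimal sketch of that check driven directly with scripts/rpc.py, using only RPC names that appear in the log (the surrounding shell error handling is illustrative, and an nvmf_tgt already listening on the default /var/tmp/spdk.sock is assumed; transport and listener setup are omitted because the claim check does not need them):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # One 64 MiB / 512 B-block malloc bdev, attached to the first subsystem.
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

    # A second subsystem must not be able to claim the same bdev; rpc.py exits
    # non-zero on the JSON-RPC error, which is what the branch below keys off.
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    if "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo 'unexpected: Malloc0 was added to two subsystems' >&2
        exit 1
    fi
    echo 'Adding namespace failed - expected result.'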
00:25:29.384 16:02:32 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:25:29.384 test case2: host connect to nvmf target in multiple paths 00:25:29.384 16:02:32 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:29.384 16:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.384 16:02:32 -- common/autotest_common.sh@10 -- # set +x 00:25:29.384 [2024-07-22 16:02:32.051060] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:29.384 16:02:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.384 16:02:32 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 --hostid=3afe7664-1acb-4c6d-8a94-b57f48f48b78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:29.384 16:02:32 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 --hostid=3afe7664-1acb-4c6d-8a94-b57f48f48b78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:25:29.642 16:02:32 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:25:29.642 16:02:32 -- common/autotest_common.sh@1177 -- # local i=0 00:25:29.642 16:02:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:25:29.642 16:02:32 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:25:29.642 16:02:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:25:31.543 16:02:34 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:25:31.543 16:02:34 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:25:31.543 16:02:34 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:25:31.543 16:02:34 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:25:31.543 16:02:34 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:25:31.543 16:02:34 -- common/autotest_common.sh@1187 -- # return 0 00:25:31.543 16:02:34 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:25:31.543 [global] 00:25:31.543 thread=1 00:25:31.543 invalidate=1 00:25:31.543 rw=write 00:25:31.543 time_based=1 00:25:31.543 runtime=1 00:25:31.543 ioengine=libaio 00:25:31.543 direct=1 00:25:31.543 bs=4096 00:25:31.543 iodepth=1 00:25:31.543 norandommap=0 00:25:31.543 numjobs=1 00:25:31.543 00:25:31.543 verify_dump=1 00:25:31.543 verify_backlog=512 00:25:31.543 verify_state_save=0 00:25:31.543 do_verify=1 00:25:31.543 verify=crc32c-intel 00:25:31.543 [job0] 00:25:31.543 filename=/dev/nvme0n1 00:25:31.543 Could not set queue depth (nvme0n1) 00:25:31.802 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:31.802 fio-3.35 00:25:31.802 Starting 1 thread 00:25:33.180 00:25:33.180 job0: (groupid=0, jobs=1): err= 0: pid=63050: Mon Jul 22 16:02:35 2024 00:25:33.180 read: IOPS=2903, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1001msec) 00:25:33.180 slat (nsec): min=13925, max=78933, avg=17791.86, stdev=4146.55 00:25:33.180 clat (usec): min=138, max=264, avg=174.14, stdev=17.12 00:25:33.180 lat (usec): min=153, max=286, avg=191.94, stdev=18.36 00:25:33.180 clat percentiles (usec): 00:25:33.180 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:25:33.180 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 178], 00:25:33.180 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 196], 
95.00th=[ 204], 00:25:33.180 | 99.00th=[ 227], 99.50th=[ 235], 99.90th=[ 265], 99.95th=[ 265], 00:25:33.180 | 99.99th=[ 265] 00:25:33.180 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:25:33.180 slat (usec): min=16, max=123, avg=27.06, stdev= 6.09 00:25:33.180 clat (usec): min=88, max=352, avg=112.56, stdev=15.76 00:25:33.180 lat (usec): min=110, max=401, avg=139.62, stdev=18.40 00:25:33.180 clat percentiles (usec): 00:25:33.180 | 1.00th=[ 91], 5.00th=[ 94], 10.00th=[ 97], 20.00th=[ 100], 00:25:33.180 | 30.00th=[ 103], 40.00th=[ 106], 50.00th=[ 111], 60.00th=[ 116], 00:25:33.180 | 70.00th=[ 120], 80.00th=[ 125], 90.00th=[ 133], 95.00th=[ 139], 00:25:33.180 | 99.00th=[ 151], 99.50th=[ 159], 99.90th=[ 273], 99.95th=[ 343], 00:25:33.180 | 99.99th=[ 355] 00:25:33.180 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:25:33.180 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:25:33.180 lat (usec) : 100=11.14%, 250=88.66%, 500=0.20% 00:25:33.180 cpu : usr=2.70%, sys=10.70%, ctx=5980, majf=0, minf=2 00:25:33.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:33.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.180 issued rwts: total=2906,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:33.180 00:25:33.180 Run status group 0 (all jobs): 00:25:33.180 READ: bw=11.3MiB/s (11.9MB/s), 11.3MiB/s-11.3MiB/s (11.9MB/s-11.9MB/s), io=11.4MiB (11.9MB), run=1001-1001msec 00:25:33.180 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:25:33.180 00:25:33.180 Disk stats (read/write): 00:25:33.180 nvme0n1: ios=2610/2824, merge=0/0, ticks=468/345, in_queue=813, util=91.08% 00:25:33.180 16:02:35 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:33.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:25:33.180 16:02:35 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:33.180 16:02:35 -- common/autotest_common.sh@1198 -- # local i=0 00:25:33.180 16:02:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:33.180 16:02:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:33.180 16:02:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:33.180 16:02:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:33.180 16:02:35 -- common/autotest_common.sh@1210 -- # return 0 00:25:33.180 16:02:35 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:25:33.180 16:02:35 -- target/nmic.sh@53 -- # nvmftestfini 00:25:33.180 16:02:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:33.180 16:02:35 -- nvmf/common.sh@116 -- # sync 00:25:33.180 16:02:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:33.180 16:02:35 -- nvmf/common.sh@119 -- # set +e 00:25:33.180 16:02:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:33.180 16:02:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:33.180 rmmod nvme_tcp 00:25:33.180 rmmod nvme_fabrics 00:25:33.180 rmmod nvme_keyring 00:25:33.180 16:02:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:33.180 16:02:35 -- nvmf/common.sh@123 -- # set -e 00:25:33.180 16:02:35 -- nvmf/common.sh@124 -- # return 0 00:25:33.180 16:02:35 -- nvmf/common.sh@477 -- # '[' -n 
62964 ']' 00:25:33.180 16:02:35 -- nvmf/common.sh@478 -- # killprocess 62964 00:25:33.180 16:02:35 -- common/autotest_common.sh@926 -- # '[' -z 62964 ']' 00:25:33.180 16:02:35 -- common/autotest_common.sh@930 -- # kill -0 62964 00:25:33.180 16:02:35 -- common/autotest_common.sh@931 -- # uname 00:25:33.180 16:02:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:33.180 16:02:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62964 00:25:33.180 killing process with pid 62964 00:25:33.180 16:02:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:33.180 16:02:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:33.180 16:02:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62964' 00:25:33.180 16:02:35 -- common/autotest_common.sh@945 -- # kill 62964 00:25:33.180 16:02:35 -- common/autotest_common.sh@950 -- # wait 62964 00:25:33.180 16:02:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:33.180 16:02:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:33.180 16:02:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:33.180 16:02:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:33.180 16:02:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:33.180 16:02:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.180 16:02:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:33.180 16:02:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.439 16:02:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:33.439 ************************************ 00:25:33.439 END TEST nvmf_nmic 00:25:33.439 ************************************ 00:25:33.439 00:25:33.439 real 0m5.708s 00:25:33.439 user 0m18.637s 00:25:33.439 sys 0m2.062s 00:25:33.439 16:02:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:33.439 16:02:36 -- common/autotest_common.sh@10 -- # set +x 00:25:33.439 16:02:36 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:25:33.439 16:02:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:33.439 16:02:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:33.439 16:02:36 -- common/autotest_common.sh@10 -- # set +x 00:25:33.439 ************************************ 00:25:33.439 START TEST nvmf_fio_target 00:25:33.439 ************************************ 00:25:33.439 16:02:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:25:33.439 * Looking for test storage... 
00:25:33.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:33.439 16:02:36 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:33.439 16:02:36 -- nvmf/common.sh@7 -- # uname -s 00:25:33.439 16:02:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.439 16:02:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.439 16:02:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.439 16:02:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.439 16:02:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.439 16:02:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.439 16:02:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.439 16:02:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.439 16:02:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.439 16:02:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.439 16:02:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:25:33.439 16:02:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:25:33.439 16:02:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.439 16:02:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.439 16:02:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:33.439 16:02:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:33.439 16:02:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.439 16:02:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.439 16:02:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.439 16:02:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.439 16:02:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.439 16:02:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.439 16:02:36 -- paths/export.sh@5 
-- # export PATH 00:25:33.439 16:02:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.439 16:02:36 -- nvmf/common.sh@46 -- # : 0 00:25:33.439 16:02:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:33.439 16:02:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:33.439 16:02:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:33.439 16:02:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.439 16:02:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.439 16:02:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:33.439 16:02:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:33.439 16:02:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:33.439 16:02:36 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:33.439 16:02:36 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:33.439 16:02:36 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:33.439 16:02:36 -- target/fio.sh@16 -- # nvmftestinit 00:25:33.439 16:02:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:33.439 16:02:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.439 16:02:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:33.439 16:02:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:33.439 16:02:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:33.439 16:02:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.439 16:02:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:33.440 16:02:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.440 16:02:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:33.440 16:02:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:33.440 16:02:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:33.440 16:02:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:33.440 16:02:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:33.440 16:02:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:33.440 16:02:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.440 16:02:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.440 16:02:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:33.440 16:02:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:33.440 16:02:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:33.440 16:02:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:33.440 16:02:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:33.440 16:02:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.440 16:02:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:33.440 16:02:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:33.440 16:02:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:33.440 16:02:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:33.440 16:02:36 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:33.440 16:02:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:33.440 Cannot find device "nvmf_tgt_br" 00:25:33.440 16:02:36 -- nvmf/common.sh@154 -- # true 00:25:33.440 16:02:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:33.440 Cannot find device "nvmf_tgt_br2" 00:25:33.440 16:02:36 -- nvmf/common.sh@155 -- # true 00:25:33.440 16:02:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:33.440 16:02:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:33.440 Cannot find device "nvmf_tgt_br" 00:25:33.440 16:02:36 -- nvmf/common.sh@157 -- # true 00:25:33.440 16:02:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:33.440 Cannot find device "nvmf_tgt_br2" 00:25:33.440 16:02:36 -- nvmf/common.sh@158 -- # true 00:25:33.440 16:02:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:33.698 16:02:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:33.698 16:02:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:33.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:33.698 16:02:36 -- nvmf/common.sh@161 -- # true 00:25:33.698 16:02:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:33.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:33.698 16:02:36 -- nvmf/common.sh@162 -- # true 00:25:33.698 16:02:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:33.698 16:02:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:33.698 16:02:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:33.698 16:02:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:33.698 16:02:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:33.698 16:02:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:33.698 16:02:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:33.698 16:02:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:33.698 16:02:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:33.698 16:02:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:33.698 16:02:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:33.698 16:02:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:33.698 16:02:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:33.698 16:02:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:33.698 16:02:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:33.698 16:02:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:33.698 16:02:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:33.698 16:02:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:33.698 16:02:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:33.698 16:02:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:33.698 16:02:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:33.698 16:02:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:33.698 16:02:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:33.698 16:02:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:33.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:25:33.698 00:25:33.698 --- 10.0.0.2 ping statistics --- 00:25:33.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.698 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:25:33.698 16:02:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:33.698 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:33.698 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:25:33.698 00:25:33.698 --- 10.0.0.3 ping statistics --- 00:25:33.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.698 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:25:33.698 16:02:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:33.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:33.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:25:33.698 00:25:33.698 --- 10.0.0.1 ping statistics --- 00:25:33.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.698 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:25:33.698 16:02:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.698 16:02:36 -- nvmf/common.sh@421 -- # return 0 00:25:33.698 16:02:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:33.698 16:02:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.698 16:02:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:33.698 16:02:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:33.698 16:02:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.698 16:02:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:33.698 16:02:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:33.957 16:02:36 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:25:33.957 16:02:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:33.957 16:02:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:33.957 16:02:36 -- common/autotest_common.sh@10 -- # set +x 00:25:33.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.957 16:02:36 -- nvmf/common.sh@469 -- # nvmfpid=63228 00:25:33.957 16:02:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:33.957 16:02:36 -- nvmf/common.sh@470 -- # waitforlisten 63228 00:25:33.957 16:02:36 -- common/autotest_common.sh@819 -- # '[' -z 63228 ']' 00:25:33.957 16:02:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.957 16:02:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:33.957 16:02:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.957 16:02:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:33.957 16:02:36 -- common/autotest_common.sh@10 -- # set +x 00:25:33.957 [2024-07-22 16:02:36.642827] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:25:33.957 [2024-07-22 16:02:36.642935] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.957 [2024-07-22 16:02:36.792886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:34.224 [2024-07-22 16:02:36.864660] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:34.224 [2024-07-22 16:02:36.865064] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.224 [2024-07-22 16:02:36.865252] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.224 [2024-07-22 16:02:36.865465] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:34.224 [2024-07-22 16:02:36.865893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.224 [2024-07-22 16:02:36.866095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:34.224 [2024-07-22 16:02:36.866033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.224 [2024-07-22 16:02:36.866099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.791 16:02:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:34.791 16:02:37 -- common/autotest_common.sh@852 -- # return 0 00:25:34.791 16:02:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:34.791 16:02:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:34.791 16:02:37 -- common/autotest_common.sh@10 -- # set +x 00:25:34.791 16:02:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:34.791 16:02:37 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:35.050 [2024-07-22 16:02:37.841209] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.050 16:02:37 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:35.308 16:02:38 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:25:35.308 16:02:38 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:35.874 16:02:38 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:25:35.874 16:02:38 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:35.874 16:02:38 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:25:35.874 16:02:38 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:36.133 16:02:38 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:25:36.133 16:02:38 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:25:36.391 16:02:39 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:36.649 16:02:39 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:25:36.649 16:02:39 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:36.907 16:02:39 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:25:36.907 16:02:39 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:37.166 16:02:39 -- target/fio.sh@31 -- # 
concat_malloc_bdevs+=Malloc6 00:25:37.166 16:02:39 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:25:37.426 16:02:40 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:37.685 16:02:40 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:25:37.685 16:02:40 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:37.943 16:02:40 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:25:37.943 16:02:40 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:38.201 16:02:40 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:38.459 [2024-07-22 16:02:41.207612] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.459 16:02:41 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:25:38.717 16:02:41 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:25:38.975 16:02:41 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 --hostid=3afe7664-1acb-4c6d-8a94-b57f48f48b78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:39.234 16:02:41 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:25:39.234 16:02:41 -- common/autotest_common.sh@1177 -- # local i=0 00:25:39.234 16:02:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:25:39.234 16:02:41 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:25:39.234 16:02:41 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:25:39.234 16:02:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:25:41.135 16:02:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:25:41.135 16:02:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:25:41.135 16:02:43 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:25:41.135 16:02:43 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:25:41.135 16:02:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:25:41.135 16:02:43 -- common/autotest_common.sh@1187 -- # return 0 00:25:41.135 16:02:43 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:25:41.135 [global] 00:25:41.135 thread=1 00:25:41.135 invalidate=1 00:25:41.135 rw=write 00:25:41.135 time_based=1 00:25:41.135 runtime=1 00:25:41.135 ioengine=libaio 00:25:41.135 direct=1 00:25:41.135 bs=4096 00:25:41.135 iodepth=1 00:25:41.135 norandommap=0 00:25:41.135 numjobs=1 00:25:41.135 00:25:41.135 verify_dump=1 00:25:41.135 verify_backlog=512 00:25:41.135 verify_state_save=0 00:25:41.135 do_verify=1 00:25:41.135 verify=crc32c-intel 00:25:41.135 [job0] 00:25:41.135 filename=/dev/nvme0n1 00:25:41.135 [job1] 00:25:41.135 filename=/dev/nvme0n2 00:25:41.135 [job2] 00:25:41.135 filename=/dev/nvme0n3 00:25:41.135 [job3] 00:25:41.135 filename=/dev/nvme0n4 00:25:41.135 Could not set queue depth (nvme0n1) 00:25:41.135 Could not set queue depth (nvme0n2) 
00:25:41.135 Could not set queue depth (nvme0n3) 00:25:41.135 Could not set queue depth (nvme0n4) 00:25:41.393 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:41.393 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:41.393 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:41.393 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:41.393 fio-3.35 00:25:41.393 Starting 4 threads 00:25:42.768 00:25:42.768 job0: (groupid=0, jobs=1): err= 0: pid=63418: Mon Jul 22 16:02:45 2024 00:25:42.768 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:25:42.768 slat (nsec): min=14951, max=60855, avg=22383.89, stdev=6553.90 00:25:42.768 clat (usec): min=136, max=714, avg=174.87, stdev=21.84 00:25:42.768 lat (usec): min=157, max=750, avg=197.25, stdev=23.23 00:25:42.768 clat percentiles (usec): 00:25:42.768 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:25:42.768 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:25:42.768 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 208], 00:25:42.768 | 99.00th=[ 245], 99.50th=[ 258], 99.90th=[ 297], 99.95th=[ 453], 00:25:42.768 | 99.99th=[ 717] 00:25:42.768 write: IOPS=2841, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec); 0 zone resets 00:25:42.768 slat (usec): min=22, max=122, avg=33.56, stdev=10.17 00:25:42.768 clat (usec): min=98, max=289, avg=134.95, stdev=16.28 00:25:42.768 lat (usec): min=123, max=411, avg=168.51, stdev=20.84 00:25:42.768 clat percentiles (usec): 00:25:42.768 | 1.00th=[ 106], 5.00th=[ 114], 10.00th=[ 119], 20.00th=[ 123], 00:25:42.768 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:25:42.768 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 163], 00:25:42.768 | 99.00th=[ 192], 99.50th=[ 204], 99.90th=[ 235], 99.95th=[ 239], 00:25:42.768 | 99.99th=[ 289] 00:25:42.768 bw ( KiB/s): min=12239, max=12239, per=26.57%, avg=12239.00, stdev= 0.00, samples=1 00:25:42.768 iops : min= 3059, max= 3059, avg=3059.00, stdev= 0.00, samples=1 00:25:42.768 lat (usec) : 100=0.02%, 250=99.59%, 500=0.37%, 750=0.02% 00:25:42.768 cpu : usr=3.20%, sys=12.60%, ctx=5406, majf=0, minf=7 00:25:42.768 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:42.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.768 issued rwts: total=2560,2844,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.768 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:42.768 job1: (groupid=0, jobs=1): err= 0: pid=63419: Mon Jul 22 16:02:45 2024 00:25:42.768 read: IOPS=2730, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec) 00:25:42.768 slat (nsec): min=11959, max=41069, avg=17017.76, stdev=4654.85 00:25:42.768 clat (usec): min=134, max=1040, avg=169.13, stdev=28.08 00:25:42.768 lat (usec): min=149, max=1053, avg=186.15, stdev=28.80 00:25:42.768 clat percentiles (usec): 00:25:42.768 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:25:42.768 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:25:42.768 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 190], 00:25:42.768 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 515], 99.95th=[ 996], 00:25:42.768 | 99.99th=[ 1037] 00:25:42.768 write: IOPS=3068, 
BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:25:42.768 slat (usec): min=14, max=117, avg=25.85, stdev= 8.78 00:25:42.768 clat (usec): min=91, max=221, avg=130.22, stdev=12.95 00:25:42.768 lat (usec): min=111, max=336, avg=156.07, stdev=17.11 00:25:42.768 clat percentiles (usec): 00:25:42.768 | 1.00th=[ 103], 5.00th=[ 111], 10.00th=[ 116], 20.00th=[ 121], 00:25:42.768 | 30.00th=[ 124], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 133], 00:25:42.768 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 153], 00:25:42.768 | 99.00th=[ 167], 99.50th=[ 172], 99.90th=[ 192], 99.95th=[ 204], 00:25:42.768 | 99.99th=[ 223] 00:25:42.768 bw ( KiB/s): min=12239, max=12239, per=26.57%, avg=12239.00, stdev= 0.00, samples=1 00:25:42.768 iops : min= 3059, max= 3059, avg=3059.00, stdev= 0.00, samples=1 00:25:42.768 lat (usec) : 100=0.26%, 250=99.64%, 500=0.05%, 750=0.02%, 1000=0.02% 00:25:42.768 lat (msec) : 2=0.02% 00:25:42.768 cpu : usr=2.30%, sys=10.20%, ctx=5805, majf=0, minf=7 00:25:42.768 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:42.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.768 issued rwts: total=2733,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.768 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:42.768 job2: (groupid=0, jobs=1): err= 0: pid=63420: Mon Jul 22 16:02:45 2024 00:25:42.768 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:25:42.768 slat (usec): min=13, max=170, avg=20.04, stdev= 6.23 00:25:42.768 clat (usec): min=81, max=2192, avg=183.15, stdev=57.81 00:25:42.768 lat (usec): min=158, max=2211, avg=203.19, stdev=58.01 00:25:42.768 clat percentiles (usec): 00:25:42.768 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:25:42.768 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:25:42.768 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 208], 00:25:42.768 | 99.00th=[ 243], 99.50th=[ 285], 99.90th=[ 766], 99.95th=[ 1942], 00:25:42.768 | 99.99th=[ 2180] 00:25:42.768 write: IOPS=2730, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec); 0 zone resets 00:25:42.768 slat (usec): min=19, max=142, avg=28.93, stdev= 7.69 00:25:42.768 clat (usec): min=105, max=527, avg=141.96, stdev=17.63 00:25:42.768 lat (usec): min=129, max=554, avg=170.88, stdev=19.86 00:25:42.768 clat percentiles (usec): 00:25:42.768 | 1.00th=[ 115], 5.00th=[ 122], 10.00th=[ 126], 20.00th=[ 130], 00:25:42.768 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 145], 00:25:42.768 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 161], 95.00th=[ 167], 00:25:42.768 | 99.00th=[ 184], 99.50th=[ 194], 99.90th=[ 243], 99.95th=[ 506], 00:25:42.768 | 99.99th=[ 529] 00:25:42.768 bw ( KiB/s): min=12239, max=12239, per=26.57%, avg=12239.00, stdev= 0.00, samples=1 00:25:42.768 iops : min= 3059, max= 3059, avg=3059.00, stdev= 0.00, samples=1 00:25:42.768 lat (usec) : 100=0.02%, 250=99.51%, 500=0.38%, 750=0.04%, 1000=0.02% 00:25:42.768 lat (msec) : 2=0.02%, 4=0.02% 00:25:42.768 cpu : usr=2.80%, sys=10.20%, ctx=5295, majf=0, minf=9 00:25:42.768 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:42.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.768 issued rwts: total=2560,2733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.768 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:25:42.768 job3: (groupid=0, jobs=1): err= 0: pid=63421: Mon Jul 22 16:02:45 2024 00:25:42.768 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:25:42.768 slat (nsec): min=12653, max=54092, avg=18656.57, stdev=5112.13 00:25:42.768 clat (usec): min=144, max=676, avg=179.47, stdev=20.70 00:25:42.768 lat (usec): min=162, max=691, avg=198.13, stdev=21.42 00:25:42.768 clat percentiles (usec): 00:25:42.768 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:25:42.768 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:25:42.768 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 204], 00:25:42.768 | 99.00th=[ 217], 99.50th=[ 231], 99.90th=[ 529], 99.95th=[ 652], 00:25:42.768 | 99.99th=[ 676] 00:25:42.768 write: IOPS=2877, BW=11.2MiB/s (11.8MB/s)(11.2MiB/1001msec); 0 zone resets 00:25:42.768 slat (nsec): min=18933, max=81825, avg=25360.53, stdev=6978.53 00:25:42.769 clat (usec): min=103, max=251, avg=141.53, stdev=15.44 00:25:42.769 lat (usec): min=125, max=288, avg=166.89, stdev=17.87 00:25:42.769 clat percentiles (usec): 00:25:42.769 | 1.00th=[ 117], 5.00th=[ 122], 10.00th=[ 125], 20.00th=[ 130], 00:25:42.769 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 143], 00:25:42.769 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 167], 00:25:42.769 | 99.00th=[ 196], 99.50th=[ 210], 99.90th=[ 247], 99.95th=[ 251], 00:25:42.769 | 99.99th=[ 251] 00:25:42.769 bw ( KiB/s): min=12239, max=12239, per=26.57%, avg=12239.00, stdev= 0.00, samples=1 00:25:42.769 iops : min= 3059, max= 3059, avg=3059.00, stdev= 0.00, samples=1 00:25:42.769 lat (usec) : 250=99.82%, 500=0.13%, 750=0.06% 00:25:42.769 cpu : usr=3.00%, sys=9.20%, ctx=5445, majf=0, minf=12 00:25:42.769 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:42.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.769 issued rwts: total=2560,2880,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.769 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:42.769 00:25:42.769 Run status group 0 (all jobs): 00:25:42.769 READ: bw=40.6MiB/s (42.6MB/s), 9.99MiB/s-10.7MiB/s (10.5MB/s-11.2MB/s), io=40.7MiB (42.7MB), run=1001-1001msec 00:25:42.769 WRITE: bw=45.0MiB/s (47.2MB/s), 10.7MiB/s-12.0MiB/s (11.2MB/s-12.6MB/s), io=45.0MiB (47.2MB), run=1001-1001msec 00:25:42.769 00:25:42.769 Disk stats (read/write): 00:25:42.769 nvme0n1: ios=2101/2560, merge=0/0, ticks=411/391, in_queue=802, util=88.05% 00:25:42.769 nvme0n2: ios=2358/2560, merge=0/0, ticks=410/363, in_queue=773, util=87.45% 00:25:42.769 nvme0n3: ios=2048/2493, merge=0/0, ticks=380/380, in_queue=760, util=89.27% 00:25:42.769 nvme0n4: ios=2124/2560, merge=0/0, ticks=391/392, in_queue=783, util=89.74% 00:25:42.769 16:02:45 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:25:42.769 [global] 00:25:42.769 thread=1 00:25:42.769 invalidate=1 00:25:42.769 rw=randwrite 00:25:42.769 time_based=1 00:25:42.769 runtime=1 00:25:42.769 ioengine=libaio 00:25:42.769 direct=1 00:25:42.769 bs=4096 00:25:42.769 iodepth=1 00:25:42.769 norandommap=0 00:25:42.769 numjobs=1 00:25:42.769 00:25:42.769 verify_dump=1 00:25:42.769 verify_backlog=512 00:25:42.769 verify_state_save=0 00:25:42.769 do_verify=1 00:25:42.769 verify=crc32c-intel 00:25:42.769 [job0] 00:25:42.769 filename=/dev/nvme0n1 00:25:42.769 [job1] 
00:25:42.769 filename=/dev/nvme0n2 00:25:42.769 [job2] 00:25:42.769 filename=/dev/nvme0n3 00:25:42.769 [job3] 00:25:42.769 filename=/dev/nvme0n4 00:25:42.769 Could not set queue depth (nvme0n1) 00:25:42.769 Could not set queue depth (nvme0n2) 00:25:42.769 Could not set queue depth (nvme0n3) 00:25:42.769 Could not set queue depth (nvme0n4) 00:25:42.769 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:42.769 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:42.769 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:42.769 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:42.769 fio-3.35 00:25:42.769 Starting 4 threads 00:25:44.144 00:25:44.144 job0: (groupid=0, jobs=1): err= 0: pid=63474: Mon Jul 22 16:02:46 2024 00:25:44.144 read: IOPS=1139, BW=4559KiB/s (4669kB/s)(4564KiB/1001msec) 00:25:44.144 slat (nsec): min=15245, max=62163, avg=27321.56, stdev=6790.47 00:25:44.144 clat (usec): min=163, max=716, avg=371.93, stdev=79.12 00:25:44.144 lat (usec): min=187, max=742, avg=399.25, stdev=81.21 00:25:44.144 clat percentiles (usec): 00:25:44.144 | 1.00th=[ 243], 5.00th=[ 293], 10.00th=[ 310], 20.00th=[ 326], 00:25:44.144 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 351], 00:25:44.144 | 70.00th=[ 363], 80.00th=[ 437], 90.00th=[ 498], 95.00th=[ 523], 00:25:44.144 | 99.00th=[ 644], 99.50th=[ 652], 99.90th=[ 676], 99.95th=[ 717], 00:25:44.144 | 99.99th=[ 717] 00:25:44.144 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:25:44.144 slat (usec): min=19, max=191, avg=40.75, stdev=12.42 00:25:44.144 clat (usec): min=96, max=692, avg=306.91, stdev=90.56 00:25:44.144 lat (usec): min=120, max=751, avg=347.66, stdev=96.09 00:25:44.144 clat percentiles (usec): 00:25:44.144 | 1.00th=[ 111], 5.00th=[ 147], 10.00th=[ 245], 20.00th=[ 262], 00:25:44.144 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:25:44.144 | 70.00th=[ 306], 80.00th=[ 363], 90.00th=[ 457], 95.00th=[ 478], 00:25:44.144 | 99.00th=[ 586], 99.50th=[ 635], 99.90th=[ 685], 99.95th=[ 693], 00:25:44.144 | 99.99th=[ 693] 00:25:44.144 bw ( KiB/s): min= 6536, max= 6536, per=21.30%, avg=6536.00, stdev= 0.00, samples=1 00:25:44.144 iops : min= 1634, max= 1634, avg=1634.00, stdev= 0.00, samples=1 00:25:44.144 lat (usec) : 100=0.11%, 250=6.87%, 500=87.82%, 750=5.19% 00:25:44.144 cpu : usr=2.10%, sys=7.60%, ctx=2677, majf=0, minf=24 00:25:44.144 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:44.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.144 issued rwts: total=1141,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.144 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:44.144 job1: (groupid=0, jobs=1): err= 0: pid=63475: Mon Jul 22 16:02:46 2024 00:25:44.144 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:25:44.144 slat (nsec): min=9734, max=82778, avg=19280.90, stdev=7872.03 00:25:44.144 clat (usec): min=245, max=3154, avg=342.82, stdev=83.79 00:25:44.144 lat (usec): min=262, max=3167, avg=362.10, stdev=83.94 00:25:44.144 clat percentiles (usec): 00:25:44.144 | 1.00th=[ 277], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 310], 00:25:44.144 | 30.00th=[ 322], 40.00th=[ 
330], 50.00th=[ 334], 60.00th=[ 343], 00:25:44.144 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 375], 95.00th=[ 400], 00:25:44.144 | 99.00th=[ 529], 99.50th=[ 562], 99.90th=[ 725], 99.95th=[ 3163], 00:25:44.144 | 99.99th=[ 3163] 00:25:44.144 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:25:44.144 slat (nsec): min=14458, max=94697, avg=27811.45, stdev=9347.41 00:25:44.144 clat (usec): min=101, max=531, avg=256.15, stdev=50.14 00:25:44.144 lat (usec): min=137, max=576, avg=283.96, stdev=51.24 00:25:44.144 clat percentiles (usec): 00:25:44.144 | 1.00th=[ 178], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 215], 00:25:44.144 | 30.00th=[ 229], 40.00th=[ 241], 50.00th=[ 251], 60.00th=[ 262], 00:25:44.144 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 355], 00:25:44.144 | 99.00th=[ 429], 99.50th=[ 445], 99.90th=[ 502], 99.95th=[ 529], 00:25:44.144 | 99.99th=[ 529] 00:25:44.144 bw ( KiB/s): min= 8192, max= 8192, per=26.69%, avg=8192.00, stdev= 0.00, samples=1 00:25:44.144 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:25:44.144 lat (usec) : 250=24.54%, 500=74.45%, 750=0.98% 00:25:44.144 lat (msec) : 4=0.03% 00:25:44.144 cpu : usr=1.60%, sys=6.30%, ctx=3073, majf=0, minf=7 00:25:44.144 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:44.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.144 issued rwts: total=1536,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.144 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:44.144 job2: (groupid=0, jobs=1): err= 0: pid=63476: Mon Jul 22 16:02:46 2024 00:25:44.144 read: IOPS=1535, BW=6140KiB/s (6287kB/s)(6140KiB/1000msec) 00:25:44.144 slat (nsec): min=10032, max=81534, avg=21888.99, stdev=7813.13 00:25:44.144 clat (usec): min=251, max=3230, avg=340.29, stdev=86.21 00:25:44.144 lat (usec): min=266, max=3247, avg=362.17, stdev=86.17 00:25:44.144 clat percentiles (usec): 00:25:44.144 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 306], 00:25:44.144 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 343], 00:25:44.144 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 375], 95.00th=[ 396], 00:25:44.144 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 799], 99.95th=[ 3228], 00:25:44.144 | 99.99th=[ 3228] 00:25:44.144 write: IOPS=1536, BW=6144KiB/s (6291kB/s)(6144KiB/1000msec); 0 zone resets 00:25:44.144 slat (usec): min=12, max=104, avg=30.14, stdev=11.36 00:25:44.144 clat (usec): min=149, max=506, avg=253.60, stdev=46.34 00:25:44.144 lat (usec): min=184, max=576, avg=283.74, stdev=50.63 00:25:44.144 clat percentiles (usec): 00:25:44.144 | 1.00th=[ 178], 5.00th=[ 192], 10.00th=[ 202], 20.00th=[ 212], 00:25:44.144 | 30.00th=[ 229], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 260], 00:25:44.144 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 343], 00:25:44.144 | 99.00th=[ 412], 99.50th=[ 437], 99.90th=[ 506], 99.95th=[ 506], 00:25:44.144 | 99.99th=[ 506] 00:25:44.144 bw ( KiB/s): min= 8192, max= 8192, per=26.69%, avg=8192.00, stdev= 0.00, samples=1 00:25:44.144 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:25:44.144 lat (usec) : 250=24.75%, 500=74.21%, 750=0.98%, 1000=0.03% 00:25:44.144 lat (msec) : 4=0.03% 00:25:44.144 cpu : usr=1.60%, sys=7.30%, ctx=3072, majf=0, minf=9 00:25:44.144 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:44.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.144 issued rwts: total=1535,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.144 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:44.144 job3: (groupid=0, jobs=1): err= 0: pid=63477: Mon Jul 22 16:02:46 2024 00:25:44.144 read: IOPS=2568, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:25:44.144 slat (nsec): min=12458, max=41074, avg=14989.88, stdev=2505.23 00:25:44.144 clat (usec): min=139, max=906, avg=180.98, stdev=26.39 00:25:44.145 lat (usec): min=153, max=927, avg=195.97, stdev=26.66 00:25:44.145 clat percentiles (usec): 00:25:44.145 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:25:44.145 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:25:44.145 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 206], 95.00th=[ 229], 00:25:44.145 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 289], 99.95th=[ 293], 00:25:44.145 | 99.99th=[ 906] 00:25:44.145 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:25:44.145 slat (usec): min=15, max=104, avg=22.46, stdev= 5.27 00:25:44.145 clat (usec): min=98, max=1083, avg=135.40, stdev=24.58 00:25:44.145 lat (usec): min=119, max=1109, avg=157.86, stdev=25.46 00:25:44.145 clat percentiles (usec): 00:25:44.145 | 1.00th=[ 102], 5.00th=[ 110], 10.00th=[ 116], 20.00th=[ 122], 00:25:44.145 | 30.00th=[ 127], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 139], 00:25:44.145 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 163], 00:25:44.145 | 99.00th=[ 192], 99.50th=[ 208], 99.90th=[ 245], 99.95th=[ 343], 00:25:44.145 | 99.99th=[ 1090] 00:25:44.145 bw ( KiB/s): min=12288, max=12288, per=40.04%, avg=12288.00, stdev= 0.00, samples=1 00:25:44.145 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:25:44.145 lat (usec) : 100=0.09%, 250=99.08%, 500=0.80%, 1000=0.02% 00:25:44.145 lat (msec) : 2=0.02% 00:25:44.145 cpu : usr=2.90%, sys=8.10%, ctx=5643, majf=0, minf=7 00:25:44.145 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:44.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.145 issued rwts: total=2571,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.145 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:44.145 00:25:44.145 Run status group 0 (all jobs): 00:25:44.145 READ: bw=26.5MiB/s (27.8MB/s), 4559KiB/s-10.0MiB/s (4669kB/s-10.5MB/s), io=26.5MiB (27.8MB), run=1000-1001msec 00:25:44.145 WRITE: bw=30.0MiB/s (31.4MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=30.0MiB (31.5MB), run=1000-1001msec 00:25:44.145 00:25:44.145 Disk stats (read/write): 00:25:44.145 nvme0n1: ios=1074/1274, merge=0/0, ticks=417/410, in_queue=827, util=88.28% 00:25:44.145 nvme0n2: ios=1203/1536, merge=0/0, ticks=379/362, in_queue=741, util=88.28% 00:25:44.145 nvme0n3: ios=1156/1536, merge=0/0, ticks=383/377, in_queue=760, util=89.09% 00:25:44.145 nvme0n4: ios=2292/2560, merge=0/0, ticks=412/360, in_queue=772, util=89.86% 00:25:44.145 16:02:46 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:25:44.145 [global] 00:25:44.145 thread=1 00:25:44.145 invalidate=1 00:25:44.145 rw=write 00:25:44.145 time_based=1 00:25:44.145 runtime=1 00:25:44.145 ioengine=libaio 00:25:44.145 direct=1 00:25:44.145 bs=4096 00:25:44.145 iodepth=128 00:25:44.145 
norandommap=0 00:25:44.145 numjobs=1 00:25:44.145 00:25:44.145 verify_dump=1 00:25:44.145 verify_backlog=512 00:25:44.145 verify_state_save=0 00:25:44.145 do_verify=1 00:25:44.145 verify=crc32c-intel 00:25:44.145 [job0] 00:25:44.145 filename=/dev/nvme0n1 00:25:44.145 [job1] 00:25:44.145 filename=/dev/nvme0n2 00:25:44.145 [job2] 00:25:44.145 filename=/dev/nvme0n3 00:25:44.145 [job3] 00:25:44.145 filename=/dev/nvme0n4 00:25:44.145 Could not set queue depth (nvme0n1) 00:25:44.145 Could not set queue depth (nvme0n2) 00:25:44.145 Could not set queue depth (nvme0n3) 00:25:44.145 Could not set queue depth (nvme0n4) 00:25:44.145 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:44.145 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:44.145 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:44.145 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:44.145 fio-3.35 00:25:44.145 Starting 4 threads 00:25:45.520 00:25:45.520 job0: (groupid=0, jobs=1): err= 0: pid=63537: Mon Jul 22 16:02:47 2024 00:25:45.520 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:25:45.520 slat (usec): min=7, max=6357, avg=180.75, stdev=909.09 00:25:45.520 clat (usec): min=17189, max=25729, avg=23733.11, stdev=1141.13 00:25:45.520 lat (usec): min=22033, max=25749, avg=23913.86, stdev=693.58 00:25:45.520 clat percentiles (usec): 00:25:45.520 | 1.00th=[18482], 5.00th=[22414], 10.00th=[22676], 20.00th=[23200], 00:25:45.520 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:25:45.520 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[24773], 00:25:45.520 | 99.00th=[25297], 99.50th=[25560], 99.90th=[25560], 99.95th=[25822], 00:25:45.520 | 99.99th=[25822] 00:25:45.520 write: IOPS=2731, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1003msec); 0 zone resets 00:25:45.520 slat (usec): min=10, max=7681, avg=187.32, stdev=893.15 00:25:45.520 clat (usec): min=2115, max=28440, avg=23841.53, stdev=3198.64 00:25:45.520 lat (usec): min=2138, max=28462, avg=24028.85, stdev=3084.41 00:25:45.520 clat percentiles (usec): 00:25:45.520 | 1.00th=[ 7701], 5.00th=[19268], 10.00th=[22676], 20.00th=[23462], 00:25:45.520 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:25:45.520 | 70.00th=[24773], 80.00th=[25297], 90.00th=[26084], 95.00th=[26870], 00:25:45.520 | 99.00th=[28443], 99.50th=[28443], 99.90th=[28443], 99.95th=[28443], 00:25:45.520 | 99.99th=[28443] 00:25:45.520 bw ( KiB/s): min= 9120, max=11807, per=16.03%, avg=10463.50, stdev=1900.00, samples=2 00:25:45.520 iops : min= 2280, max= 2951, avg=2615.50, stdev=474.47, samples=2 00:25:45.520 lat (msec) : 4=0.38%, 10=0.60%, 20=3.92%, 50=95.09% 00:25:45.520 cpu : usr=2.40%, sys=9.28%, ctx=167, majf=0, minf=15 00:25:45.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:45.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:45.520 issued rwts: total=2560,2740,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:45.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:45.520 job1: (groupid=0, jobs=1): err= 0: pid=63538: Mon Jul 22 16:02:47 2024 00:25:45.520 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:25:45.520 slat (usec): min=6, max=2548, 
avg=82.26, stdev=374.96 00:25:45.520 clat (usec): min=8348, max=12307, avg=11108.17, stdev=478.92 00:25:45.520 lat (usec): min=10364, max=12363, avg=11190.43, stdev=301.23 00:25:45.520 clat percentiles (usec): 00:25:45.520 | 1.00th=[ 8848], 5.00th=[10552], 10.00th=[10683], 20.00th=[10945], 00:25:45.520 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11207], 00:25:45.520 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11600], 95.00th=[11731], 00:25:45.520 | 99.00th=[11994], 99.50th=[11994], 99.90th=[12256], 99.95th=[12256], 00:25:45.521 | 99.99th=[12256] 00:25:45.521 write: IOPS=5749, BW=22.5MiB/s (23.5MB/s)(22.5MiB/1002msec); 0 zone resets 00:25:45.521 slat (usec): min=11, max=2536, avg=85.68, stdev=345.79 00:25:45.521 clat (usec): min=141, max=12415, avg=11128.38, stdev=981.32 00:25:45.521 lat (usec): min=1777, max=12454, avg=11214.07, stdev=919.89 00:25:45.521 clat percentiles (usec): 00:25:45.521 | 1.00th=[ 5473], 5.00th=[10290], 10.00th=[10683], 20.00th=[10945], 00:25:45.521 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:25:45.521 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11731], 95.00th=[11863], 00:25:45.521 | 99.00th=[12125], 99.50th=[12256], 99.90th=[12387], 99.95th=[12387], 00:25:45.521 | 99.99th=[12387] 00:25:45.521 bw ( KiB/s): min=20992, max=24072, per=34.52%, avg=22532.00, stdev=2177.89, samples=2 00:25:45.521 iops : min= 5248, max= 6018, avg=5633.00, stdev=544.47, samples=2 00:25:45.521 lat (usec) : 250=0.01% 00:25:45.521 lat (msec) : 2=0.06%, 4=0.22%, 10=3.62%, 20=96.09% 00:25:45.521 cpu : usr=4.20%, sys=17.28%, ctx=359, majf=0, minf=5 00:25:45.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:25:45.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:45.521 issued rwts: total=5632,5761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:45.521 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:45.521 job2: (groupid=0, jobs=1): err= 0: pid=63539: Mon Jul 22 16:02:47 2024 00:25:45.521 read: IOPS=4912, BW=19.2MiB/s (20.1MB/s)(19.3MiB/1004msec) 00:25:45.521 slat (usec): min=3, max=11954, avg=95.66, stdev=631.59 00:25:45.521 clat (usec): min=841, max=24966, avg=12898.87, stdev=2349.25 00:25:45.521 lat (usec): min=3798, max=24991, avg=12994.53, stdev=2375.80 00:25:45.521 clat percentiles (usec): 00:25:45.521 | 1.00th=[ 4948], 5.00th=[ 9896], 10.00th=[11863], 20.00th=[12256], 00:25:45.521 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[13042], 00:25:45.521 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13960], 95.00th=[17433], 00:25:45.521 | 99.00th=[22152], 99.50th=[23462], 99.90th=[24773], 99.95th=[24773], 00:25:45.521 | 99.99th=[25035] 00:25:45.521 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:25:45.521 slat (usec): min=5, max=11136, avg=96.07, stdev=569.10 00:25:45.521 clat (usec): min=3544, max=25127, avg=12414.23, stdev=1725.46 00:25:45.521 lat (usec): min=3565, max=25150, avg=12510.30, stdev=1684.77 00:25:45.521 clat percentiles (usec): 00:25:45.521 | 1.00th=[ 5145], 5.00th=[ 9372], 10.00th=[11076], 20.00th=[11600], 00:25:45.521 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[12780], 00:25:45.521 | 70.00th=[13042], 80.00th=[13435], 90.00th=[13698], 95.00th=[14091], 00:25:45.521 | 99.00th=[17171], 99.50th=[17171], 99.90th=[17433], 99.95th=[23987], 00:25:45.521 | 99.99th=[25035] 00:25:45.521 bw ( KiB/s): min=20480, max=20480, 
per=31.38%, avg=20480.00, stdev= 0.00, samples=2 00:25:45.521 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:25:45.521 lat (usec) : 1000=0.01% 00:25:45.521 lat (msec) : 4=0.20%, 10=6.10%, 20=92.08%, 50=1.61% 00:25:45.521 cpu : usr=4.89%, sys=13.96%, ctx=259, majf=0, minf=14 00:25:45.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:45.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:45.521 issued rwts: total=4932,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:45.521 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:45.521 job3: (groupid=0, jobs=1): err= 0: pid=63540: Mon Jul 22 16:02:47 2024 00:25:45.521 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:25:45.521 slat (usec): min=6, max=7085, avg=182.17, stdev=919.98 00:25:45.521 clat (usec): min=15641, max=26338, avg=23687.00, stdev=1262.87 00:25:45.521 lat (usec): min=20822, max=26372, avg=23869.17, stdev=882.76 00:25:45.521 clat percentiles (usec): 00:25:45.521 | 1.00th=[18482], 5.00th=[21365], 10.00th=[22414], 20.00th=[23200], 00:25:45.521 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:25:45.521 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:25:45.521 | 99.00th=[26084], 99.50th=[26346], 99.90th=[26346], 99.95th=[26346], 00:25:45.521 | 99.99th=[26346] 00:25:45.521 write: IOPS=2751, BW=10.7MiB/s (11.3MB/s)(10.8MiB/1003msec); 0 zone resets 00:25:45.521 slat (usec): min=15, max=6576, avg=184.44, stdev=867.86 00:25:45.521 clat (usec): min=2293, max=26863, avg=23714.60, stdev=2813.36 00:25:45.521 lat (usec): min=2317, max=26890, avg=23899.04, stdev=2680.70 00:25:45.521 clat percentiles (usec): 00:25:45.521 | 1.00th=[ 7963], 5.00th=[19268], 10.00th=[21365], 20.00th=[23462], 00:25:45.521 | 30.00th=[23725], 40.00th=[23725], 50.00th=[24249], 60.00th=[24511], 00:25:45.521 | 70.00th=[25035], 80.00th=[25035], 90.00th=[25822], 95.00th=[26084], 00:25:45.521 | 99.00th=[26870], 99.50th=[26870], 99.90th=[26870], 99.95th=[26870], 00:25:45.521 | 99.99th=[26870] 00:25:45.521 bw ( KiB/s): min= 9280, max=11807, per=16.15%, avg=10543.50, stdev=1786.86, samples=2 00:25:45.521 iops : min= 2320, max= 2951, avg=2635.50, stdev=446.18, samples=2 00:25:45.521 lat (msec) : 4=0.15%, 10=0.60%, 20=3.89%, 50=95.36% 00:25:45.521 cpu : usr=2.30%, sys=9.88%, ctx=167, majf=0, minf=15 00:25:45.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:45.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:45.521 issued rwts: total=2560,2760,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:45.521 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:45.521 00:25:45.521 Run status group 0 (all jobs): 00:25:45.521 READ: bw=61.0MiB/s (64.0MB/s), 9.97MiB/s-22.0MiB/s (10.5MB/s-23.0MB/s), io=61.3MiB (64.2MB), run=1002-1004msec 00:25:45.521 WRITE: bw=63.7MiB/s (66.8MB/s), 10.7MiB/s-22.5MiB/s (11.2MB/s-23.5MB/s), io=64.0MiB (67.1MB), run=1002-1004msec 00:25:45.521 00:25:45.521 Disk stats (read/write): 00:25:45.521 nvme0n1: ios=2098/2496, merge=0/0, ticks=11542/14029, in_queue=25571, util=88.88% 00:25:45.521 nvme0n2: ios=4737/5120, merge=0/0, ticks=11297/12108, in_queue=23405, util=88.46% 00:25:45.521 nvme0n3: ios=4096/4479, merge=0/0, ticks=50115/50964, in_queue=101079, util=88.47% 00:25:45.521 
nvme0n4: ios=2048/2528, merge=0/0, ticks=11548/13826, in_queue=25374, util=89.75% 00:25:45.521 16:02:47 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:25:45.521 [global] 00:25:45.521 thread=1 00:25:45.521 invalidate=1 00:25:45.521 rw=randwrite 00:25:45.521 time_based=1 00:25:45.521 runtime=1 00:25:45.521 ioengine=libaio 00:25:45.521 direct=1 00:25:45.521 bs=4096 00:25:45.521 iodepth=128 00:25:45.521 norandommap=0 00:25:45.521 numjobs=1 00:25:45.521 00:25:45.521 verify_dump=1 00:25:45.521 verify_backlog=512 00:25:45.521 verify_state_save=0 00:25:45.521 do_verify=1 00:25:45.521 verify=crc32c-intel 00:25:45.521 [job0] 00:25:45.521 filename=/dev/nvme0n1 00:25:45.521 [job1] 00:25:45.521 filename=/dev/nvme0n2 00:25:45.521 [job2] 00:25:45.521 filename=/dev/nvme0n3 00:25:45.521 [job3] 00:25:45.521 filename=/dev/nvme0n4 00:25:45.521 Could not set queue depth (nvme0n1) 00:25:45.521 Could not set queue depth (nvme0n2) 00:25:45.521 Could not set queue depth (nvme0n3) 00:25:45.521 Could not set queue depth (nvme0n4) 00:25:45.521 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:45.521 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:45.521 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:45.521 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:45.521 fio-3.35 00:25:45.521 Starting 4 threads 00:25:46.896 00:25:46.896 job0: (groupid=0, jobs=1): err= 0: pid=63598: Mon Jul 22 16:02:49 2024 00:25:46.896 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:25:46.896 slat (usec): min=3, max=12065, avg=165.63, stdev=700.77 00:25:46.896 clat (usec): min=12758, max=32886, avg=20714.81, stdev=3462.39 00:25:46.896 lat (usec): min=12768, max=32902, avg=20880.44, stdev=3493.40 00:25:46.896 clat percentiles (usec): 00:25:46.896 | 1.00th=[14091], 5.00th=[15926], 10.00th=[16909], 20.00th=[18482], 00:25:46.896 | 30.00th=[19268], 40.00th=[19792], 50.00th=[20055], 60.00th=[20317], 00:25:46.896 | 70.00th=[20841], 80.00th=[22414], 90.00th=[26346], 95.00th=[28181], 00:25:46.896 | 99.00th=[30278], 99.50th=[31589], 99.90th=[32900], 99.95th=[32900], 00:25:46.896 | 99.99th=[32900] 00:25:46.896 write: IOPS=3198, BW=12.5MiB/s (13.1MB/s)(12.6MiB/1006msec); 0 zone resets 00:25:46.896 slat (usec): min=4, max=9586, avg=145.45, stdev=582.83 00:25:46.896 clat (usec): min=2499, max=31299, avg=19847.46, stdev=4135.99 00:25:46.896 lat (usec): min=7238, max=31326, avg=19992.91, stdev=4128.55 00:25:46.896 clat percentiles (usec): 00:25:46.896 | 1.00th=[ 8291], 5.00th=[11076], 10.00th=[13566], 20.00th=[15795], 00:25:46.896 | 30.00th=[19530], 40.00th=[20055], 50.00th=[20841], 60.00th=[21627], 00:25:46.896 | 70.00th=[22414], 80.00th=[22676], 90.00th=[23725], 95.00th=[25035], 00:25:46.896 | 99.00th=[28443], 99.50th=[30016], 99.90th=[30540], 99.95th=[31327], 00:25:46.896 | 99.99th=[31327] 00:25:46.896 bw ( KiB/s): min=12360, max=12368, per=24.73%, avg=12364.00, stdev= 5.66, samples=2 00:25:46.896 iops : min= 3090, max= 3092, avg=3091.00, stdev= 1.41, samples=2 00:25:46.896 lat (msec) : 4=0.02%, 10=1.05%, 20=39.84%, 50=59.09% 00:25:46.896 cpu : usr=2.79%, sys=9.05%, ctx=979, majf=0, minf=9 00:25:46.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:46.896 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:46.896 issued rwts: total=3072,3218,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.896 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:46.896 job1: (groupid=0, jobs=1): err= 0: pid=63599: Mon Jul 22 16:02:49 2024 00:25:46.896 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:25:46.896 slat (usec): min=8, max=16859, avg=159.77, stdev=1116.48 00:25:46.896 clat (usec): min=13284, max=42081, avg=21878.72, stdev=3410.15 00:25:46.896 lat (usec): min=13300, max=42106, avg=22038.50, stdev=3503.32 00:25:46.896 clat percentiles (usec): 00:25:46.896 | 1.00th=[15664], 5.00th=[17433], 10.00th=[19530], 20.00th=[19792], 00:25:46.896 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20317], 60.00th=[20841], 00:25:46.896 | 70.00th=[22676], 80.00th=[24773], 90.00th=[27657], 95.00th=[28967], 00:25:46.896 | 99.00th=[29754], 99.50th=[32900], 99.90th=[34341], 99.95th=[35914], 00:25:46.896 | 99.99th=[42206] 00:25:46.896 write: IOPS=3138, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1004msec); 0 zone resets 00:25:46.896 slat (usec): min=12, max=11087, avg=153.61, stdev=976.10 00:25:46.896 clat (usec): min=1003, max=27309, avg=19047.60, stdev=3681.02 00:25:46.896 lat (usec): min=5487, max=27357, avg=19201.21, stdev=3582.90 00:25:46.896 clat percentiles (usec): 00:25:46.896 | 1.00th=[ 6521], 5.00th=[11994], 10.00th=[13304], 20.00th=[17695], 00:25:46.896 | 30.00th=[18744], 40.00th=[19268], 50.00th=[19792], 60.00th=[20055], 00:25:46.896 | 70.00th=[20317], 80.00th=[20841], 90.00th=[23462], 95.00th=[25035], 00:25:46.896 | 99.00th=[25297], 99.50th=[25560], 99.90th=[25560], 99.95th=[25560], 00:25:46.896 | 99.99th=[27395] 00:25:46.896 bw ( KiB/s): min=12288, max=12288, per=24.58%, avg=12288.00, stdev= 0.00, samples=2 00:25:46.896 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:25:46.896 lat (msec) : 2=0.02%, 10=1.03%, 20=39.03%, 50=59.92% 00:25:46.896 cpu : usr=3.59%, sys=9.67%, ctx=137, majf=0, minf=8 00:25:46.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:46.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:46.896 issued rwts: total=3072,3151,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.896 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:46.896 job2: (groupid=0, jobs=1): err= 0: pid=63600: Mon Jul 22 16:02:49 2024 00:25:46.896 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:25:46.896 slat (usec): min=8, max=8965, avg=159.97, stdev=653.18 00:25:46.896 clat (usec): min=7176, max=29051, avg=19917.38, stdev=2861.22 00:25:46.896 lat (usec): min=7188, max=29071, avg=20077.35, stdev=2884.06 00:25:46.896 clat percentiles (usec): 00:25:46.896 | 1.00th=[13566], 5.00th=[15008], 10.00th=[16712], 20.00th=[18220], 00:25:46.896 | 30.00th=[19268], 40.00th=[19530], 50.00th=[20055], 60.00th=[20317], 00:25:46.896 | 70.00th=[20579], 80.00th=[21627], 90.00th=[23462], 95.00th=[25035], 00:25:46.896 | 99.00th=[27657], 99.50th=[28181], 99.90th=[28181], 99.95th=[28181], 00:25:46.896 | 99.99th=[28967] 00:25:46.896 write: IOPS=3109, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1003msec); 0 zone resets 00:25:46.896 slat (usec): min=4, max=8273, avg=155.97, stdev=616.31 00:25:46.896 clat (usec): min=318, max=30161, avg=20706.86, stdev=3576.77 00:25:46.896 lat (usec): min=2785, max=30733, avg=20862.83, 
stdev=3598.22 00:25:46.896 clat percentiles (usec): 00:25:46.896 | 1.00th=[ 6783], 5.00th=[14877], 10.00th=[15664], 20.00th=[19006], 00:25:46.896 | 30.00th=[20055], 40.00th=[20841], 50.00th=[21365], 60.00th=[21890], 00:25:46.896 | 70.00th=[22414], 80.00th=[22676], 90.00th=[23200], 95.00th=[25560], 00:25:46.896 | 99.00th=[28967], 99.50th=[29230], 99.90th=[30016], 99.95th=[30016], 00:25:46.896 | 99.99th=[30278] 00:25:46.896 bw ( KiB/s): min=12288, max=12288, per=24.58%, avg=12288.00, stdev= 0.00, samples=2 00:25:46.896 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:25:46.896 lat (usec) : 500=0.02% 00:25:46.896 lat (msec) : 4=0.24%, 10=0.78%, 20=38.59%, 50=60.38% 00:25:46.896 cpu : usr=3.49%, sys=7.98%, ctx=904, majf=0, minf=11 00:25:46.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:46.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:46.897 issued rwts: total=3072,3119,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:46.897 job3: (groupid=0, jobs=1): err= 0: pid=63601: Mon Jul 22 16:02:49 2024 00:25:46.897 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:25:46.897 slat (usec): min=7, max=9316, avg=148.88, stdev=739.43 00:25:46.897 clat (usec): min=7719, max=40007, avg=20565.46, stdev=3100.85 00:25:46.897 lat (usec): min=7734, max=40023, avg=20714.34, stdev=3078.74 00:25:46.897 clat percentiles (usec): 00:25:46.897 | 1.00th=[10945], 5.00th=[15926], 10.00th=[18220], 20.00th=[19268], 00:25:46.897 | 30.00th=[19530], 40.00th=[19792], 50.00th=[20055], 60.00th=[20317], 00:25:46.897 | 70.00th=[20579], 80.00th=[21890], 90.00th=[24249], 95.00th=[25822], 00:25:46.897 | 99.00th=[31327], 99.50th=[33817], 99.90th=[40109], 99.95th=[40109], 00:25:46.897 | 99.99th=[40109] 00:25:46.897 write: IOPS=3071, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1004msec); 0 zone resets 00:25:46.897 slat (usec): min=11, max=12674, avg=167.84, stdev=1061.18 00:25:46.897 clat (usec): min=1079, max=35816, avg=20502.77, stdev=2685.02 00:25:46.897 lat (usec): min=7277, max=35878, avg=20670.62, stdev=2836.43 00:25:46.897 clat percentiles (usec): 00:25:46.897 | 1.00th=[14615], 5.00th=[15795], 10.00th=[18482], 20.00th=[19268], 00:25:46.897 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20055], 60.00th=[20317], 00:25:46.897 | 70.00th=[20579], 80.00th=[21365], 90.00th=[24511], 95.00th=[25822], 00:25:46.897 | 99.00th=[27919], 99.50th=[29230], 99.90th=[33424], 99.95th=[33817], 00:25:46.897 | 99.99th=[35914] 00:25:46.897 bw ( KiB/s): min=12288, max=12312, per=24.61%, avg=12300.00, stdev=16.97, samples=2 00:25:46.897 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:25:46.897 lat (msec) : 2=0.02%, 10=0.31%, 20=42.92%, 50=56.76% 00:25:46.897 cpu : usr=3.79%, sys=9.27%, ctx=204, majf=0, minf=7 00:25:46.897 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:46.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:46.897 issued rwts: total=3072,3084,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:46.897 00:25:46.897 Run status group 0 (all jobs): 00:25:46.897 READ: bw=47.7MiB/s (50.0MB/s), 11.9MiB/s-12.0MiB/s (12.5MB/s-12.5MB/s), io=48.0MiB (50.3MB), run=1003-1006msec 00:25:46.897 
WRITE: bw=48.8MiB/s (51.2MB/s), 12.0MiB/s-12.5MiB/s (12.6MB/s-13.1MB/s), io=49.1MiB (51.5MB), run=1003-1006msec 00:25:46.897 00:25:46.897 Disk stats (read/write): 00:25:46.897 nvme0n1: ios=2610/2754, merge=0/0, ticks=25990/25347, in_queue=51337, util=87.16% 00:25:46.897 nvme0n2: ios=2599/2688, merge=0/0, ticks=53716/47926, in_queue=101642, util=88.53% 00:25:46.897 nvme0n3: ios=2560/2645, merge=0/0, ticks=25012/25506, in_queue=50518, util=87.23% 00:25:46.897 nvme0n4: ios=2560/2647, merge=0/0, ticks=25640/24558, in_queue=50198, util=89.63% 00:25:46.897 16:02:49 -- target/fio.sh@55 -- # sync 00:25:46.897 16:02:49 -- target/fio.sh@59 -- # fio_pid=63615 00:25:46.897 16:02:49 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:25:46.897 16:02:49 -- target/fio.sh@61 -- # sleep 3 00:25:46.897 [global] 00:25:46.897 thread=1 00:25:46.897 invalidate=1 00:25:46.897 rw=read 00:25:46.897 time_based=1 00:25:46.897 runtime=10 00:25:46.897 ioengine=libaio 00:25:46.897 direct=1 00:25:46.897 bs=4096 00:25:46.897 iodepth=1 00:25:46.897 norandommap=1 00:25:46.897 numjobs=1 00:25:46.897 00:25:46.897 [job0] 00:25:46.897 filename=/dev/nvme0n1 00:25:46.897 [job1] 00:25:46.897 filename=/dev/nvme0n2 00:25:46.897 [job2] 00:25:46.897 filename=/dev/nvme0n3 00:25:46.897 [job3] 00:25:46.897 filename=/dev/nvme0n4 00:25:46.897 Could not set queue depth (nvme0n1) 00:25:46.897 Could not set queue depth (nvme0n2) 00:25:46.897 Could not set queue depth (nvme0n3) 00:25:46.897 Could not set queue depth (nvme0n4) 00:25:46.897 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:46.897 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:46.897 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:46.897 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:46.897 fio-3.35 00:25:46.897 Starting 4 threads 00:25:50.205 16:02:52 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:25:50.205 fio: pid=63658, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:25:50.205 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=35667968, buflen=4096 00:25:50.205 16:02:52 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:25:50.205 fio: pid=63657, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:25:50.205 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=66846720, buflen=4096 00:25:50.205 16:02:53 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:50.205 16:02:53 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:25:50.463 fio: pid=63655, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:25:50.463 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=51621888, buflen=4096 00:25:50.721 16:02:53 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:50.721 16:02:53 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:25:50.980 fio: pid=63656, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:25:50.980 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read 
offset=53882880, buflen=4096 00:25:50.980 00:25:50.980 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=63655: Mon Jul 22 16:02:53 2024 00:25:50.980 read: IOPS=3490, BW=13.6MiB/s (14.3MB/s)(49.2MiB/3611msec) 00:25:50.980 slat (usec): min=10, max=19881, avg=22.44, stdev=232.40 00:25:50.980 clat (usec): min=3, max=7288, avg=262.10, stdev=117.91 00:25:50.980 lat (usec): min=138, max=20083, avg=284.54, stdev=259.95 00:25:50.980 clat percentiles (usec): 00:25:50.980 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 151], 20.00th=[ 159], 00:25:50.980 | 30.00th=[ 172], 40.00th=[ 227], 50.00th=[ 302], 60.00th=[ 318], 00:25:50.980 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 367], 00:25:50.980 | 99.00th=[ 478], 99.50th=[ 494], 99.90th=[ 865], 99.95th=[ 1369], 00:25:50.980 | 99.99th=[ 2835] 00:25:50.980 bw ( KiB/s): min=10680, max=21760, per=25.12%, avg=13150.67, stdev=4252.77, samples=6 00:25:50.980 iops : min= 2670, max= 5440, avg=3287.67, stdev=1063.19, samples=6 00:25:50.980 lat (usec) : 4=0.03%, 250=44.82%, 500=54.71%, 750=0.29%, 1000=0.07% 00:25:50.980 lat (msec) : 2=0.03%, 4=0.02%, 10=0.01% 00:25:50.980 cpu : usr=1.61%, sys=6.01%, ctx=12627, majf=0, minf=1 00:25:50.980 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:50.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.980 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.980 issued rwts: total=12604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.980 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:50.980 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=63656: Mon Jul 22 16:02:53 2024 00:25:50.980 read: IOPS=3390, BW=13.2MiB/s (13.9MB/s)(51.4MiB/3880msec) 00:25:50.980 slat (usec): min=9, max=12344, avg=21.24, stdev=187.89 00:25:50.980 clat (usec): min=102, max=13898, avg=271.72, stdev=156.08 00:25:50.980 lat (usec): min=134, max=13930, avg=292.96, stdev=243.54 00:25:50.980 clat percentiles (usec): 00:25:50.980 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 159], 00:25:50.980 | 30.00th=[ 182], 40.00th=[ 260], 50.00th=[ 306], 60.00th=[ 318], 00:25:50.980 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 355], 95.00th=[ 375], 00:25:50.980 | 99.00th=[ 478], 99.50th=[ 498], 99.90th=[ 791], 99.95th=[ 1598], 00:25:50.980 | 99.99th=[ 3458] 00:25:50.980 bw ( KiB/s): min=10800, max=19498, per=24.37%, avg=12759.14, stdev=3080.20, samples=7 00:25:50.980 iops : min= 2700, max= 4874, avg=3189.71, stdev=769.87, samples=7 00:25:50.980 lat (usec) : 250=37.37%, 500=62.14%, 750=0.36%, 1000=0.05% 00:25:50.980 lat (msec) : 2=0.04%, 4=0.02%, 20=0.01% 00:25:50.980 cpu : usr=1.39%, sys=5.57%, ctx=13172, majf=0, minf=1 00:25:50.980 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:50.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.980 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.980 issued rwts: total=13156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.980 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:50.980 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=63657: Mon Jul 22 16:02:53 2024 00:25:50.980 read: IOPS=4895, BW=19.1MiB/s (20.1MB/s)(63.8MiB/3334msec) 00:25:50.980 slat (usec): min=12, max=11520, avg=18.04, stdev=108.97 00:25:50.980 clat (usec): min=4, max=3723, 
avg=184.33, stdev=45.38 00:25:50.980 lat (usec): min=145, max=11694, avg=202.37, stdev=118.22 00:25:50.980 clat percentiles (usec): 00:25:50.980 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 163], 00:25:50.980 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 184], 00:25:50.980 | 70.00th=[ 190], 80.00th=[ 200], 90.00th=[ 221], 95.00th=[ 239], 00:25:50.980 | 99.00th=[ 273], 99.50th=[ 302], 99.90th=[ 478], 99.95th=[ 725], 00:25:50.980 | 99.99th=[ 1909] 00:25:50.980 bw ( KiB/s): min=19336, max=20664, per=38.32%, avg=20064.00, stdev=523.32, samples=6 00:25:50.980 iops : min= 4834, max= 5166, avg=5016.00, stdev=130.83, samples=6 00:25:50.980 lat (usec) : 10=0.02%, 250=97.30%, 500=2.59%, 750=0.06%, 1000=0.01% 00:25:50.980 lat (msec) : 2=0.02%, 4=0.01% 00:25:50.980 cpu : usr=2.16%, sys=7.23%, ctx=16329, majf=0, minf=1 00:25:50.980 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:50.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.980 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.980 issued rwts: total=16321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.980 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:50.980 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=63658: Mon Jul 22 16:02:53 2024 00:25:50.980 read: IOPS=2902, BW=11.3MiB/s (11.9MB/s)(34.0MiB/3001msec) 00:25:50.980 slat (usec): min=13, max=215, avg=24.33, stdev= 6.83 00:25:50.980 clat (usec): min=150, max=3615, avg=317.35, stdev=66.50 00:25:50.980 lat (usec): min=171, max=3642, avg=341.69, stdev=67.54 00:25:50.980 clat percentiles (usec): 00:25:50.980 | 1.00th=[ 227], 5.00th=[ 247], 10.00th=[ 262], 20.00th=[ 285], 00:25:50.980 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 322], 00:25:50.980 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 359], 95.00th=[ 383], 00:25:50.980 | 99.00th=[ 474], 99.50th=[ 498], 99.90th=[ 635], 99.95th=[ 865], 00:25:50.980 | 99.99th=[ 3621] 00:25:50.981 bw ( KiB/s): min=11336, max=13320, per=22.56%, avg=11812.80, stdev=853.02, samples=5 00:25:50.981 iops : min= 2834, max= 3330, avg=2953.20, stdev=213.25, samples=5 00:25:50.981 lat (usec) : 250=5.68%, 500=93.82%, 750=0.41%, 1000=0.03% 00:25:50.981 lat (msec) : 2=0.01%, 4=0.02% 00:25:50.981 cpu : usr=1.60%, sys=6.40%, ctx=8713, majf=0, minf=1 00:25:50.981 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:50.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.981 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.981 issued rwts: total=8709,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.981 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:50.981 00:25:50.981 Run status group 0 (all jobs): 00:25:50.981 READ: bw=51.1MiB/s (53.6MB/s), 11.3MiB/s-19.1MiB/s (11.9MB/s-20.1MB/s), io=198MiB (208MB), run=3001-3880msec 00:25:50.981 00:25:50.981 Disk stats (read/write): 00:25:50.981 nvme0n1: ios=11145/0, merge=0/0, ticks=2997/0, in_queue=2997, util=94.88% 00:25:50.981 nvme0n2: ios=13106/0, merge=0/0, ticks=3403/0, in_queue=3403, util=95.86% 00:25:50.981 nvme0n3: ios=15482/0, merge=0/0, ticks=2872/0, in_queue=2872, util=96.27% 00:25:50.981 nvme0n4: ios=8338/0, merge=0/0, ticks=2668/0, in_queue=2668, util=96.63% 00:25:50.981 16:02:53 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:50.981 16:02:53 -- target/fio.sh@66 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:25:51.239 16:02:53 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:51.239 16:02:53 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:25:51.497 16:02:54 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:51.497 16:02:54 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:25:51.755 16:02:54 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:51.755 16:02:54 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:25:52.013 16:02:54 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:52.013 16:02:54 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:25:52.271 16:02:54 -- target/fio.sh@69 -- # fio_status=0 00:25:52.271 16:02:54 -- target/fio.sh@70 -- # wait 63615 00:25:52.271 16:02:54 -- target/fio.sh@70 -- # fio_status=4 00:25:52.271 16:02:54 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:52.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:52.271 16:02:54 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:52.271 16:02:54 -- common/autotest_common.sh@1198 -- # local i=0 00:25:52.271 16:02:54 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:52.271 16:02:54 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:52.271 16:02:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:52.271 16:02:55 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:52.271 nvmf hotplug test: fio failed as expected 00:25:52.271 16:02:55 -- common/autotest_common.sh@1210 -- # return 0 00:25:52.271 16:02:55 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:25:52.271 16:02:55 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:25:52.271 16:02:55 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:52.529 16:02:55 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:25:52.529 16:02:55 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:25:52.529 16:02:55 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:25:52.529 16:02:55 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:25:52.529 16:02:55 -- target/fio.sh@91 -- # nvmftestfini 00:25:52.529 16:02:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:52.529 16:02:55 -- nvmf/common.sh@116 -- # sync 00:25:52.529 16:02:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:52.529 16:02:55 -- nvmf/common.sh@119 -- # set +e 00:25:52.529 16:02:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:52.529 16:02:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:52.529 rmmod nvme_tcp 00:25:52.529 rmmod nvme_fabrics 00:25:52.529 rmmod nvme_keyring 00:25:52.529 16:02:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:52.529 16:02:55 -- nvmf/common.sh@123 -- # set -e 00:25:52.529 16:02:55 -- nvmf/common.sh@124 -- # return 0 00:25:52.529 16:02:55 -- nvmf/common.sh@477 -- # '[' -n 63228 ']' 00:25:52.529 16:02:55 -- nvmf/common.sh@478 -- # killprocess 63228 00:25:52.529 16:02:55 -- common/autotest_common.sh@926 -- # '[' -z 63228 
']' 00:25:52.529 16:02:55 -- common/autotest_common.sh@930 -- # kill -0 63228 00:25:52.529 16:02:55 -- common/autotest_common.sh@931 -- # uname 00:25:52.529 16:02:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:52.529 16:02:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63228 00:25:52.788 killing process with pid 63228 00:25:52.788 16:02:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:52.788 16:02:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:52.788 16:02:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63228' 00:25:52.788 16:02:55 -- common/autotest_common.sh@945 -- # kill 63228 00:25:52.788 16:02:55 -- common/autotest_common.sh@950 -- # wait 63228 00:25:52.788 16:02:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:52.788 16:02:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:52.788 16:02:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:52.788 16:02:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:52.788 16:02:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:52.788 16:02:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.788 16:02:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:52.788 16:02:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.788 16:02:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:53.047 ************************************ 00:25:53.047 END TEST nvmf_fio_target 00:25:53.047 ************************************ 00:25:53.047 00:25:53.047 real 0m19.539s 00:25:53.047 user 1m14.655s 00:25:53.047 sys 0m10.111s 00:25:53.047 16:02:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:53.047 16:02:55 -- common/autotest_common.sh@10 -- # set +x 00:25:53.047 16:02:55 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:25:53.047 16:02:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:53.047 16:02:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:53.047 16:02:55 -- common/autotest_common.sh@10 -- # set +x 00:25:53.047 ************************************ 00:25:53.047 START TEST nvmf_bdevio 00:25:53.047 ************************************ 00:25:53.047 16:02:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:25:53.047 * Looking for test storage... 
00:25:53.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:53.047 16:02:55 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:53.047 16:02:55 -- nvmf/common.sh@7 -- # uname -s 00:25:53.047 16:02:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.047 16:02:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.047 16:02:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.047 16:02:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.047 16:02:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.047 16:02:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.047 16:02:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.047 16:02:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.047 16:02:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.047 16:02:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.047 16:02:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:25:53.047 16:02:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:25:53.047 16:02:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.047 16:02:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.047 16:02:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:53.047 16:02:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:53.047 16:02:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.047 16:02:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.047 16:02:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.047 16:02:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.047 16:02:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.047 16:02:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.047 16:02:55 -- 
paths/export.sh@5 -- # export PATH 00:25:53.047 16:02:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.047 16:02:55 -- nvmf/common.sh@46 -- # : 0 00:25:53.047 16:02:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:53.047 16:02:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:53.047 16:02:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:53.047 16:02:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.047 16:02:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.047 16:02:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:53.047 16:02:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:53.047 16:02:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:53.047 16:02:55 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:53.047 16:02:55 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:53.047 16:02:55 -- target/bdevio.sh@14 -- # nvmftestinit 00:25:53.047 16:02:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:53.047 16:02:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.047 16:02:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:53.047 16:02:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:53.047 16:02:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:53.047 16:02:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.047 16:02:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:53.047 16:02:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.047 16:02:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:53.047 16:02:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:53.047 16:02:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:53.047 16:02:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:53.047 16:02:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:53.047 16:02:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:53.047 16:02:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.047 16:02:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.047 16:02:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:53.047 16:02:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:53.047 16:02:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:53.047 16:02:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:53.047 16:02:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:53.047 16:02:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.047 16:02:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:53.047 16:02:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:53.047 16:02:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:53.047 16:02:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:53.047 16:02:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:53.047 
16:02:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:53.047 Cannot find device "nvmf_tgt_br" 00:25:53.047 16:02:55 -- nvmf/common.sh@154 -- # true 00:25:53.047 16:02:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:53.047 Cannot find device "nvmf_tgt_br2" 00:25:53.047 16:02:55 -- nvmf/common.sh@155 -- # true 00:25:53.047 16:02:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:53.047 16:02:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:53.047 Cannot find device "nvmf_tgt_br" 00:25:53.047 16:02:55 -- nvmf/common.sh@157 -- # true 00:25:53.047 16:02:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:53.047 Cannot find device "nvmf_tgt_br2" 00:25:53.047 16:02:55 -- nvmf/common.sh@158 -- # true 00:25:53.047 16:02:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:53.047 16:02:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:53.306 16:02:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:53.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:53.306 16:02:55 -- nvmf/common.sh@161 -- # true 00:25:53.306 16:02:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:53.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:53.306 16:02:55 -- nvmf/common.sh@162 -- # true 00:25:53.306 16:02:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:53.306 16:02:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:53.306 16:02:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:53.306 16:02:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:53.306 16:02:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:53.306 16:02:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:53.306 16:02:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:53.306 16:02:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:53.306 16:02:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:53.306 16:02:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:53.306 16:02:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:53.306 16:02:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:53.306 16:02:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:53.306 16:02:56 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:53.306 16:02:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:53.306 16:02:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:53.306 16:02:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:53.306 16:02:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:53.306 16:02:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:53.306 16:02:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:53.306 16:02:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:53.306 16:02:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:53.306 16:02:56 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:53.306 16:02:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:53.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:25:53.306 00:25:53.306 --- 10.0.0.2 ping statistics --- 00:25:53.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.306 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:25:53.306 16:02:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:53.306 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:53.306 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:25:53.306 00:25:53.306 --- 10.0.0.3 ping statistics --- 00:25:53.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.306 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:25:53.306 16:02:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:53.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:53.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:25:53.306 00:25:53.306 --- 10.0.0.1 ping statistics --- 00:25:53.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.306 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:25:53.306 16:02:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.306 16:02:56 -- nvmf/common.sh@421 -- # return 0 00:25:53.306 16:02:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:53.306 16:02:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.306 16:02:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:53.306 16:02:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:53.306 16:02:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.306 16:02:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:53.306 16:02:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:53.306 16:02:56 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:25:53.306 16:02:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:53.306 16:02:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:53.306 16:02:56 -- common/autotest_common.sh@10 -- # set +x 00:25:53.306 16:02:56 -- nvmf/common.sh@469 -- # nvmfpid=63931 00:25:53.306 16:02:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:25:53.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.306 16:02:56 -- nvmf/common.sh@470 -- # waitforlisten 63931 00:25:53.306 16:02:56 -- common/autotest_common.sh@819 -- # '[' -z 63931 ']' 00:25:53.306 16:02:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.306 16:02:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:53.306 16:02:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.306 16:02:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:53.306 16:02:56 -- common/autotest_common.sh@10 -- # set +x 00:25:53.565 [2024-07-22 16:02:56.184151] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
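The nvmf_veth_init sequence traced above reduces to a small, reproducible topology: one network namespace for the target, veth pairs whose bridge-side peers are enslaved to nvmf_br, and an iptables rule admitting TCP port 4420 on the initiator interface. A condensed sketch of the same commands follows (names and addresses are the ones nvmf/common.sh uses; the second target interface, nvmf_tgt_if2 with 10.0.0.3, is set up the same way and is omitted here for brevity):

    # veth/netns topology used by the nvmf TCP tests (condensed from the trace above)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # reachability check, as in the ping blocks above

The three ping blocks above serve exactly that purpose: 10.0.0.2 and 10.0.0.3 must answer from the root namespace and 10.0.0.1 must answer from inside the namespace before the target is started.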
00:25:53.565 [2024-07-22 16:02:56.184249] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.565 [2024-07-22 16:02:56.327009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:53.565 [2024-07-22 16:02:56.396150] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:53.565 [2024-07-22 16:02:56.396564] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.565 [2024-07-22 16:02:56.396747] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:53.565 [2024-07-22 16:02:56.396922] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:53.565 [2024-07-22 16:02:56.397278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:53.565 [2024-07-22 16:02:56.397508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:25:53.565 [2024-07-22 16:02:56.397578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:53.565 [2024-07-22 16:02:56.397573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:25:54.501 16:02:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:54.501 16:02:57 -- common/autotest_common.sh@852 -- # return 0 00:25:54.501 16:02:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:54.501 16:02:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:54.501 16:02:57 -- common/autotest_common.sh@10 -- # set +x 00:25:54.501 16:02:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.501 16:02:57 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:54.501 16:02:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.501 16:02:57 -- common/autotest_common.sh@10 -- # set +x 00:25:54.501 [2024-07-22 16:02:57.245824] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.501 16:02:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.501 16:02:57 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:54.501 16:02:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.501 16:02:57 -- common/autotest_common.sh@10 -- # set +x 00:25:54.501 Malloc0 00:25:54.501 16:02:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.501 16:02:57 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:54.501 16:02:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.501 16:02:57 -- common/autotest_common.sh@10 -- # set +x 00:25:54.501 16:02:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.501 16:02:57 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:54.501 16:02:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.501 16:02:57 -- common/autotest_common.sh@10 -- # set +x 00:25:54.501 16:02:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.501 16:02:57 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:54.501 16:02:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.501 16:02:57 -- common/autotest_common.sh@10 -- # set +x 00:25:54.501 
[2024-07-22 16:02:57.311943] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.501 16:02:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.501 16:02:57 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:25:54.501 16:02:57 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:25:54.501 16:02:57 -- nvmf/common.sh@520 -- # config=() 00:25:54.501 16:02:57 -- nvmf/common.sh@520 -- # local subsystem config 00:25:54.501 16:02:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:54.501 16:02:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:54.501 { 00:25:54.501 "params": { 00:25:54.501 "name": "Nvme$subsystem", 00:25:54.501 "trtype": "$TEST_TRANSPORT", 00:25:54.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:54.501 "adrfam": "ipv4", 00:25:54.501 "trsvcid": "$NVMF_PORT", 00:25:54.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:54.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:54.501 "hdgst": ${hdgst:-false}, 00:25:54.501 "ddgst": ${ddgst:-false} 00:25:54.501 }, 00:25:54.501 "method": "bdev_nvme_attach_controller" 00:25:54.501 } 00:25:54.501 EOF 00:25:54.501 )") 00:25:54.501 16:02:57 -- nvmf/common.sh@542 -- # cat 00:25:54.501 16:02:57 -- nvmf/common.sh@544 -- # jq . 00:25:54.501 16:02:57 -- nvmf/common.sh@545 -- # IFS=, 00:25:54.501 16:02:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:54.501 "params": { 00:25:54.501 "name": "Nvme1", 00:25:54.501 "trtype": "tcp", 00:25:54.501 "traddr": "10.0.0.2", 00:25:54.501 "adrfam": "ipv4", 00:25:54.501 "trsvcid": "4420", 00:25:54.501 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:54.501 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:54.501 "hdgst": false, 00:25:54.501 "ddgst": false 00:25:54.501 }, 00:25:54.501 "method": "bdev_nvme_attach_controller" 00:25:54.501 }' 00:25:54.760 [2024-07-22 16:02:57.369414] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:54.760 [2024-07-22 16:02:57.369532] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63967 ] 00:25:54.760 [2024-07-22 16:02:57.536628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:54.760 [2024-07-22 16:02:57.611125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.760 [2024-07-22 16:02:57.611253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:54.760 [2024-07-22 16:02:57.611257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.018 [2024-07-22 16:02:57.748947] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
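Everything the target needs for this suite is configured over the RPC socket once nvmf_tgt is listening: a TCP transport, a 64 MiB / 512-byte-block malloc bdev, one subsystem with that bdev as a namespace, and a TCP listener on 10.0.0.2:4420. The same rpc_cmd calls shown above, written out as plain rpc.py invocations:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0                 # MALLOC_BDEV_SIZE, MALLOC_BLOCK_SIZE
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The RPC socket is a filesystem UNIX socket at /var/tmp/spdk.sock, so these calls work from the root namespace even though the target's TCP listener lives inside nvmf_tgt_ns_spdk.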
00:25:55.018 [2024-07-22 16:02:57.749240] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:55.018 I/O targets: 00:25:55.018 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:25:55.018 00:25:55.018 00:25:55.018 CUnit - A unit testing framework for C - Version 2.1-3 00:25:55.018 http://cunit.sourceforge.net/ 00:25:55.018 00:25:55.018 00:25:55.018 Suite: bdevio tests on: Nvme1n1 00:25:55.018 Test: blockdev write read block ...passed 00:25:55.018 Test: blockdev write zeroes read block ...passed 00:25:55.018 Test: blockdev write zeroes read no split ...passed 00:25:55.018 Test: blockdev write zeroes read split ...passed 00:25:55.018 Test: blockdev write zeroes read split partial ...passed 00:25:55.018 Test: blockdev reset ...[2024-07-22 16:02:57.781223] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:55.018 [2024-07-22 16:02:57.781546] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f59bb0 (9): Bad file descriptor 00:25:55.018 [2024-07-22 16:02:57.798294] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:55.018 passed 00:25:55.018 Test: blockdev write read 8 blocks ...passed 00:25:55.018 Test: blockdev write read size > 128k ...passed 00:25:55.018 Test: blockdev write read invalid size ...passed 00:25:55.018 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:55.018 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:55.018 Test: blockdev write read max offset ...passed 00:25:55.018 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:55.018 Test: blockdev writev readv 8 blocks ...passed 00:25:55.018 Test: blockdev writev readv 30 x 1block ...passed 00:25:55.018 Test: blockdev writev readv block ...passed 00:25:55.018 Test: blockdev writev readv size > 128k ...passed 00:25:55.018 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:55.018 Test: blockdev comparev and writev ...[2024-07-22 16:02:57.809724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:55.018 [2024-07-22 16:02:57.810064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.018 [2024-07-22 16:02:57.810219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:55.018 [2024-07-22 16:02:57.810328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:55.018 [2024-07-22 16:02:57.810869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:55.018 [2024-07-22 16:02:57.811158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:55.018 [2024-07-22 16:02:57.811411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:55.018 [2024-07-22 16:02:57.811759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:55.018 [2024-07-22 16:02:57.812298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:55.018 [2024-07-22 16:02:57.812553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:55.018 [2024-07-22 16:02:57.812817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:55.018 [2024-07-22 16:02:57.813063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:55.018 [2024-07-22 16:02:57.813630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:55.018 [2024-07-22 16:02:57.813750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:55.018 [2024-07-22 16:02:57.813854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:55.018 [2024-07-22 16:02:57.813943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:55.018 passed 00:25:55.018 Test: blockdev nvme passthru rw ...passed 00:25:55.018 Test: blockdev nvme passthru vendor specific ...[2024-07-22 16:02:57.815366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:55.018 [2024-07-22 16:02:57.815659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:55.018 [2024-07-22 16:02:57.815939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:55.018 [2024-07-22 16:02:57.816160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:55.018 [2024-07-22 16:02:57.816443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:55.018 [2024-07-22 16:02:57.816692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:55.018 passed 00:25:55.018 Test: blockdev nvme admin passthru ...[2024-07-22 16:02:57.817152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:55.018 [2024-07-22 16:02:57.817242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:55.018 passed 00:25:55.018 Test: blockdev copy ...passed 00:25:55.018 00:25:55.018 Run Summary: Type Total Ran Passed Failed Inactive 00:25:55.018 suites 1 1 n/a 0 0 00:25:55.018 tests 23 23 23 0 0 00:25:55.018 asserts 152 152 152 0 n/a 00:25:55.019 00:25:55.019 Elapsed time = 0.162 seconds 00:25:55.276 16:02:58 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:55.276 16:02:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:55.276 16:02:58 -- common/autotest_common.sh@10 -- # set +x 00:25:55.276 16:02:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:55.276 16:02:58 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:25:55.276 16:02:58 -- target/bdevio.sh@30 -- # nvmftestfini 00:25:55.276 16:02:58 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:25:55.276 16:02:58 -- nvmf/common.sh@116 -- # sync 00:25:55.276 16:02:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:55.276 16:02:58 -- nvmf/common.sh@119 -- # set +e 00:25:55.276 16:02:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:55.276 16:02:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:55.276 rmmod nvme_tcp 00:25:55.536 rmmod nvme_fabrics 00:25:55.536 rmmod nvme_keyring 00:25:55.536 16:02:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:55.536 16:02:58 -- nvmf/common.sh@123 -- # set -e 00:25:55.536 16:02:58 -- nvmf/common.sh@124 -- # return 0 00:25:55.536 16:02:58 -- nvmf/common.sh@477 -- # '[' -n 63931 ']' 00:25:55.536 16:02:58 -- nvmf/common.sh@478 -- # killprocess 63931 00:25:55.536 16:02:58 -- common/autotest_common.sh@926 -- # '[' -z 63931 ']' 00:25:55.536 16:02:58 -- common/autotest_common.sh@930 -- # kill -0 63931 00:25:55.536 16:02:58 -- common/autotest_common.sh@931 -- # uname 00:25:55.536 16:02:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:55.536 16:02:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63931 00:25:55.536 killing process with pid 63931 00:25:55.536 16:02:58 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:25:55.536 16:02:58 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:25:55.536 16:02:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63931' 00:25:55.536 16:02:58 -- common/autotest_common.sh@945 -- # kill 63931 00:25:55.536 16:02:58 -- common/autotest_common.sh@950 -- # wait 63931 00:25:55.795 16:02:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:55.795 16:02:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:55.795 16:02:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:55.795 16:02:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:55.795 16:02:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:55.795 16:02:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.795 16:02:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:55.795 16:02:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.795 16:02:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:55.795 00:25:55.795 real 0m2.756s 00:25:55.795 user 0m9.339s 00:25:55.795 sys 0m0.630s 00:25:55.795 ************************************ 00:25:55.795 END TEST nvmf_bdevio 00:25:55.795 ************************************ 00:25:55.795 16:02:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:55.795 16:02:58 -- common/autotest_common.sh@10 -- # set +x 00:25:55.795 16:02:58 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:25:55.795 16:02:58 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:25:55.795 16:02:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:25:55.795 16:02:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:55.795 16:02:58 -- common/autotest_common.sh@10 -- # set +x 00:25:55.795 ************************************ 00:25:55.795 START TEST nvmf_bdevio_no_huge 00:25:55.795 ************************************ 00:25:55.795 16:02:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:25:55.795 * Looking for test storage... 
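Teardown (nvmftestfini plus killprocess) mirrors the setup: the subsystem is deleted over RPC, the target process is killed and reaped, the kernel NVMe-over-TCP modules pulled in earlier by modprobe are removed again, and the initiator address is flushed. Condensed from the trace above (the pid, 63931, is specific to this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 63931 && wait 63931          # nvmf_tgt started by nvmfappstart
    modprobe -v -r nvme-tcp           # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    ip -4 addr flush nvmf_init_if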
00:25:55.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:55.795 16:02:58 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:55.795 16:02:58 -- nvmf/common.sh@7 -- # uname -s 00:25:55.795 16:02:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:55.795 16:02:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:55.795 16:02:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:55.795 16:02:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:55.795 16:02:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:55.795 16:02:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:55.795 16:02:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:55.795 16:02:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:55.795 16:02:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:55.795 16:02:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:55.795 16:02:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:25:55.795 16:02:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:25:55.795 16:02:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:55.795 16:02:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:55.795 16:02:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:55.795 16:02:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:55.795 16:02:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:55.795 16:02:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:55.795 16:02:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:55.795 16:02:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.795 16:02:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.795 16:02:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.795 16:02:58 -- 
paths/export.sh@5 -- # export PATH 00:25:55.795 16:02:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.795 16:02:58 -- nvmf/common.sh@46 -- # : 0 00:25:55.795 16:02:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:55.795 16:02:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:55.795 16:02:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:55.795 16:02:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:55.795 16:02:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:55.795 16:02:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:55.795 16:02:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:55.795 16:02:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:55.795 16:02:58 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:55.795 16:02:58 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:55.795 16:02:58 -- target/bdevio.sh@14 -- # nvmftestinit 00:25:55.795 16:02:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:55.795 16:02:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:55.795 16:02:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:55.795 16:02:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:55.795 16:02:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:55.795 16:02:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.795 16:02:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:55.795 16:02:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.795 16:02:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:55.795 16:02:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:55.795 16:02:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:55.795 16:02:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:55.795 16:02:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:55.795 16:02:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:55.795 16:02:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.795 16:02:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.795 16:02:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:55.796 16:02:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:55.796 16:02:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:55.796 16:02:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:55.796 16:02:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:55.796 16:02:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.796 16:02:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:55.796 16:02:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:55.796 16:02:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:55.796 16:02:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:55.796 16:02:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:55.796 
16:02:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:55.796 Cannot find device "nvmf_tgt_br" 00:25:55.796 16:02:58 -- nvmf/common.sh@154 -- # true 00:25:55.796 16:02:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:55.796 Cannot find device "nvmf_tgt_br2" 00:25:55.796 16:02:58 -- nvmf/common.sh@155 -- # true 00:25:55.796 16:02:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:56.055 16:02:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:56.055 Cannot find device "nvmf_tgt_br" 00:25:56.055 16:02:58 -- nvmf/common.sh@157 -- # true 00:25:56.055 16:02:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:56.055 Cannot find device "nvmf_tgt_br2" 00:25:56.055 16:02:58 -- nvmf/common.sh@158 -- # true 00:25:56.055 16:02:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:56.055 16:02:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:56.055 16:02:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:56.055 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:56.055 16:02:58 -- nvmf/common.sh@161 -- # true 00:25:56.055 16:02:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:56.055 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:56.055 16:02:58 -- nvmf/common.sh@162 -- # true 00:25:56.055 16:02:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:56.055 16:02:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:56.055 16:02:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:56.055 16:02:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:56.055 16:02:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:56.055 16:02:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:56.055 16:02:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:56.055 16:02:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:56.055 16:02:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:56.055 16:02:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:56.055 16:02:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:56.055 16:02:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:56.055 16:02:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:56.055 16:02:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:56.055 16:02:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:56.055 16:02:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:56.055 16:02:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:56.055 16:02:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:56.055 16:02:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:56.055 16:02:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:56.055 16:02:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:56.055 16:02:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:56.055 16:02:58 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:56.055 16:02:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:56.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:56.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:25:56.055 00:25:56.055 --- 10.0.0.2 ping statistics --- 00:25:56.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.055 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:25:56.055 16:02:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:56.314 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:56.314 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:25:56.314 00:25:56.314 --- 10.0.0.3 ping statistics --- 00:25:56.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.314 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:25:56.314 16:02:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:56.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:56.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:25:56.314 00:25:56.314 --- 10.0.0.1 ping statistics --- 00:25:56.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.314 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:25:56.314 16:02:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:56.314 16:02:58 -- nvmf/common.sh@421 -- # return 0 00:25:56.314 16:02:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:56.314 16:02:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:56.314 16:02:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:56.314 16:02:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:56.314 16:02:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:56.314 16:02:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:56.314 16:02:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:56.314 16:02:58 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:25:56.314 16:02:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:56.315 16:02:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:56.315 16:02:58 -- common/autotest_common.sh@10 -- # set +x 00:25:56.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:56.315 16:02:58 -- nvmf/common.sh@469 -- # nvmfpid=64134 00:25:56.315 16:02:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:25:56.315 16:02:58 -- nvmf/common.sh@470 -- # waitforlisten 64134 00:25:56.315 16:02:58 -- common/autotest_common.sh@819 -- # '[' -z 64134 ']' 00:25:56.315 16:02:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.315 16:02:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:56.315 16:02:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.315 16:02:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:56.315 16:02:58 -- common/autotest_common.sh@10 -- # set +x 00:25:56.315 [2024-07-22 16:02:59.006126] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
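The no-huge variant repeats exactly the same namespace setup; what changes is how the SPDK applications are launched. Both the target and (further down) bdevio are given --no-huge with a fixed 1024 MiB of ordinary memory, which is why the EAL line above shows "-m 1024 --no-huge --iova-mode=va" where the first run had "--iova-mode=pa". Side by side, as the two invocations appear in this log:

    # nvmf_bdevio (hugepage-backed):
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
    # nvmf_bdevio_no_huge:
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024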
00:25:56.315 [2024-07-22 16:02:59.006442] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:25:56.315 [2024-07-22 16:02:59.158644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:56.573 [2024-07-22 16:02:59.296061] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:56.573 [2024-07-22 16:02:59.296238] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:56.574 [2024-07-22 16:02:59.296256] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:56.574 [2024-07-22 16:02:59.296267] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:56.574 [2024-07-22 16:02:59.296431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:56.574 [2024-07-22 16:02:59.296665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:25:56.574 [2024-07-22 16:02:59.296719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:25:56.574 [2024-07-22 16:02:59.296729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:57.509 16:03:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:57.509 16:03:00 -- common/autotest_common.sh@852 -- # return 0 00:25:57.509 16:03:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:57.509 16:03:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:57.509 16:03:00 -- common/autotest_common.sh@10 -- # set +x 00:25:57.509 16:03:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:57.509 16:03:00 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:57.509 16:03:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:57.509 16:03:00 -- common/autotest_common.sh@10 -- # set +x 00:25:57.509 [2024-07-22 16:03:00.112049] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.509 16:03:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:57.509 16:03:00 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:57.509 16:03:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:57.509 16:03:00 -- common/autotest_common.sh@10 -- # set +x 00:25:57.509 Malloc0 00:25:57.509 16:03:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:57.509 16:03:00 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:57.509 16:03:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:57.509 16:03:00 -- common/autotest_common.sh@10 -- # set +x 00:25:57.509 16:03:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:57.509 16:03:00 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:57.509 16:03:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:57.509 16:03:00 -- common/autotest_common.sh@10 -- # set +x 00:25:57.509 16:03:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:57.509 16:03:00 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:57.509 16:03:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:57.509 16:03:00 -- common/autotest_common.sh@10 -- # set +x 00:25:57.509 
[2024-07-22 16:03:00.155899] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.509 16:03:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:57.509 16:03:00 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:25:57.509 16:03:00 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:25:57.509 16:03:00 -- nvmf/common.sh@520 -- # config=() 00:25:57.509 16:03:00 -- nvmf/common.sh@520 -- # local subsystem config 00:25:57.509 16:03:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:57.509 16:03:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:57.509 { 00:25:57.509 "params": { 00:25:57.509 "name": "Nvme$subsystem", 00:25:57.509 "trtype": "$TEST_TRANSPORT", 00:25:57.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.509 "adrfam": "ipv4", 00:25:57.509 "trsvcid": "$NVMF_PORT", 00:25:57.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.509 "hdgst": ${hdgst:-false}, 00:25:57.509 "ddgst": ${ddgst:-false} 00:25:57.509 }, 00:25:57.509 "method": "bdev_nvme_attach_controller" 00:25:57.509 } 00:25:57.509 EOF 00:25:57.509 )") 00:25:57.509 16:03:00 -- nvmf/common.sh@542 -- # cat 00:25:57.509 16:03:00 -- nvmf/common.sh@544 -- # jq . 00:25:57.509 16:03:00 -- nvmf/common.sh@545 -- # IFS=, 00:25:57.509 16:03:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:57.509 "params": { 00:25:57.509 "name": "Nvme1", 00:25:57.510 "trtype": "tcp", 00:25:57.510 "traddr": "10.0.0.2", 00:25:57.510 "adrfam": "ipv4", 00:25:57.510 "trsvcid": "4420", 00:25:57.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:57.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:57.510 "hdgst": false, 00:25:57.510 "ddgst": false 00:25:57.510 }, 00:25:57.510 "method": "bdev_nvme_attach_controller" 00:25:57.510 }' 00:25:57.510 [2024-07-22 16:03:00.212602] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:57.510 [2024-07-22 16:03:00.212696] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid64176 ] 00:25:57.510 [2024-07-22 16:03:00.358392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:57.768 [2024-07-22 16:03:00.494615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.768 [2024-07-22 16:03:00.494735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.768 [2024-07-22 16:03:00.494742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.026 [2024-07-22 16:03:00.664877] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
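bdevio itself is not configured over RPC at all; the spdk_rpc_listen "/var/tmp/spdk.sock in use" error around this point is bdevio failing to claim the default RPC socket that nvmf_tgt already holds, and it is non-fatal here. Its bdev layout arrives instead as a JSON config on file descriptor 62, produced by gen_nvmf_target_json. A hand-written equivalent would look roughly like the following; only the bdev_nvme_attach_controller object is printed verbatim above, while the surrounding "subsystems"/"bdev" envelope is the standard SPDK JSON-config shape and is assumed here:

    cat > /tmp/bdevio_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024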
00:25:58.026 [2024-07-22 16:03:00.665165] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:58.026 I/O targets: 00:25:58.026 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:25:58.026 00:25:58.026 00:25:58.026 CUnit - A unit testing framework for C - Version 2.1-3 00:25:58.026 http://cunit.sourceforge.net/ 00:25:58.026 00:25:58.026 00:25:58.026 Suite: bdevio tests on: Nvme1n1 00:25:58.026 Test: blockdev write read block ...passed 00:25:58.026 Test: blockdev write zeroes read block ...passed 00:25:58.026 Test: blockdev write zeroes read no split ...passed 00:25:58.026 Test: blockdev write zeroes read split ...passed 00:25:58.026 Test: blockdev write zeroes read split partial ...passed 00:25:58.026 Test: blockdev reset ...[2024-07-22 16:03:00.707974] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:58.026 [2024-07-22 16:03:00.708299] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fceb0 (9): Bad file descriptor 00:25:58.026 [2024-07-22 16:03:00.723529] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:58.026 passed 00:25:58.026 Test: blockdev write read 8 blocks ...passed 00:25:58.026 Test: blockdev write read size > 128k ...passed 00:25:58.026 Test: blockdev write read invalid size ...passed 00:25:58.026 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:58.026 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:58.026 Test: blockdev write read max offset ...passed 00:25:58.026 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:58.026 Test: blockdev writev readv 8 blocks ...passed 00:25:58.026 Test: blockdev writev readv 30 x 1block ...passed 00:25:58.026 Test: blockdev writev readv block ...passed 00:25:58.026 Test: blockdev writev readv size > 128k ...passed 00:25:58.026 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:58.026 Test: blockdev comparev and writev ...[2024-07-22 16:03:00.734650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:58.026 [2024-07-22 16:03:00.734713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.026 [2024-07-22 16:03:00.734736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:58.026 [2024-07-22 16:03:00.734747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.026 [2024-07-22 16:03:00.735094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:58.026 [2024-07-22 16:03:00.735112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:58.026 [2024-07-22 16:03:00.735130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:58.026 [2024-07-22 16:03:00.735140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:58.026 [2024-07-22 16:03:00.735466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:58.026 [2024-07-22 16:03:00.735483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:58.026 [2024-07-22 16:03:00.735528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:58.026 [2024-07-22 16:03:00.735538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:58.026 [2024-07-22 16:03:00.736066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:58.026 [2024-07-22 16:03:00.736098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:58.026 [2024-07-22 16:03:00.736118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:58.026 [2024-07-22 16:03:00.736129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:58.026 passed 00:25:58.026 Test: blockdev nvme passthru rw ...passed 00:25:58.026 Test: blockdev nvme passthru vendor specific ...[2024-07-22 16:03:00.737661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:58.026 [2024-07-22 16:03:00.737689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:58.026 passed 00:25:58.026 Test: blockdev nvme admin passthru ...[2024-07-22 16:03:00.738005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:58.026 [2024-07-22 16:03:00.738029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:58.026 [2024-07-22 16:03:00.738153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:58.026 [2024-07-22 16:03:00.738170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:58.026 [2024-07-22 16:03:00.738298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:58.026 [2024-07-22 16:03:00.738314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:58.026 passed 00:25:58.026 Test: blockdev copy ...passed 00:25:58.026 00:25:58.026 Run Summary: Type Total Ran Passed Failed Inactive 00:25:58.026 suites 1 1 n/a 0 0 00:25:58.026 tests 23 23 23 0 0 00:25:58.026 asserts 152 152 152 0 n/a 00:25:58.026 00:25:58.026 Elapsed time = 0.190 seconds 00:25:58.284 16:03:01 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:58.284 16:03:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.284 16:03:01 -- common/autotest_common.sh@10 -- # set +x 00:25:58.284 16:03:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.284 16:03:01 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:25:58.284 16:03:01 -- target/bdevio.sh@30 -- # nvmftestfini 00:25:58.284 16:03:01 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:25:58.284 16:03:01 -- nvmf/common.sh@116 -- # sync 00:25:58.543 16:03:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:58.543 16:03:01 -- nvmf/common.sh@119 -- # set +e 00:25:58.543 16:03:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:58.543 16:03:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:58.543 rmmod nvme_tcp 00:25:58.543 rmmod nvme_fabrics 00:25:58.543 rmmod nvme_keyring 00:25:58.543 16:03:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:58.543 16:03:01 -- nvmf/common.sh@123 -- # set -e 00:25:58.543 16:03:01 -- nvmf/common.sh@124 -- # return 0 00:25:58.543 16:03:01 -- nvmf/common.sh@477 -- # '[' -n 64134 ']' 00:25:58.543 16:03:01 -- nvmf/common.sh@478 -- # killprocess 64134 00:25:58.543 16:03:01 -- common/autotest_common.sh@926 -- # '[' -z 64134 ']' 00:25:58.543 16:03:01 -- common/autotest_common.sh@930 -- # kill -0 64134 00:25:58.543 16:03:01 -- common/autotest_common.sh@931 -- # uname 00:25:58.543 16:03:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:58.543 16:03:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64134 00:25:58.543 killing process with pid 64134 00:25:58.543 16:03:01 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:25:58.543 16:03:01 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:25:58.543 16:03:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64134' 00:25:58.543 16:03:01 -- common/autotest_common.sh@945 -- # kill 64134 00:25:58.543 16:03:01 -- common/autotest_common.sh@950 -- # wait 64134 00:25:58.800 16:03:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:58.800 16:03:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:58.800 16:03:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:58.800 16:03:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:58.800 16:03:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:58.800 16:03:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.800 16:03:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:58.801 16:03:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.801 16:03:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:59.059 ************************************ 00:25:59.059 END TEST nvmf_bdevio_no_huge 00:25:59.059 ************************************ 00:25:59.059 00:25:59.059 real 0m3.150s 00:25:59.059 user 0m10.440s 00:25:59.059 sys 0m1.195s 00:25:59.059 16:03:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:59.059 16:03:01 -- common/autotest_common.sh@10 -- # set +x 00:25:59.059 16:03:01 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:25:59.059 16:03:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:59.059 16:03:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:59.059 16:03:01 -- common/autotest_common.sh@10 -- # set +x 00:25:59.059 ************************************ 00:25:59.059 START TEST nvmf_tls 00:25:59.059 ************************************ 00:25:59.059 16:03:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:25:59.059 * Looking for test storage... 
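Every suite in this log is bracketed by START TEST / END TEST banners and a real/user/sys timing block; that framing comes from the run_test helper invoked above ("run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp"), not from the test scripts themselves. A simplified sketch of what such a wrapper amounts to, not the actual autotest_common.sh implementation:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                 # the timed command, e.g. tls.sh --transport=tcp
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }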
00:25:59.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:59.059 16:03:01 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:59.059 16:03:01 -- nvmf/common.sh@7 -- # uname -s 00:25:59.059 16:03:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.059 16:03:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.059 16:03:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.059 16:03:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.059 16:03:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.059 16:03:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.059 16:03:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.059 16:03:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.059 16:03:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.059 16:03:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.059 16:03:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:25:59.059 16:03:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:25:59.059 16:03:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.059 16:03:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.059 16:03:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:59.059 16:03:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:59.059 16:03:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.059 16:03:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.060 16:03:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.060 16:03:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.060 16:03:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.060 16:03:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.060 16:03:01 -- paths/export.sh@5 
-- # export PATH 00:25:59.060 16:03:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.060 16:03:01 -- nvmf/common.sh@46 -- # : 0 00:25:59.060 16:03:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:59.060 16:03:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:59.060 16:03:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:59.060 16:03:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.060 16:03:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.060 16:03:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:59.060 16:03:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:59.060 16:03:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:59.060 16:03:01 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:59.060 16:03:01 -- target/tls.sh@71 -- # nvmftestinit 00:25:59.060 16:03:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:59.060 16:03:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.060 16:03:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:59.060 16:03:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:59.060 16:03:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:59.060 16:03:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.060 16:03:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:59.060 16:03:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.060 16:03:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:59.060 16:03:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:59.060 16:03:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:59.060 16:03:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:59.060 16:03:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:59.060 16:03:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:59.060 16:03:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.060 16:03:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.060 16:03:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:59.060 16:03:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:59.060 16:03:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:59.060 16:03:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:59.060 16:03:01 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:59.060 16:03:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.060 16:03:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:59.060 16:03:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:59.060 16:03:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:59.060 16:03:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:59.060 16:03:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:59.060 16:03:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br 
nomaster 00:25:59.060 Cannot find device "nvmf_tgt_br" 00:25:59.060 16:03:01 -- nvmf/common.sh@154 -- # true 00:25:59.060 16:03:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:59.060 Cannot find device "nvmf_tgt_br2" 00:25:59.060 16:03:01 -- nvmf/common.sh@155 -- # true 00:25:59.060 16:03:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:59.060 16:03:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:59.060 Cannot find device "nvmf_tgt_br" 00:25:59.060 16:03:01 -- nvmf/common.sh@157 -- # true 00:25:59.060 16:03:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:59.060 Cannot find device "nvmf_tgt_br2" 00:25:59.060 16:03:01 -- nvmf/common.sh@158 -- # true 00:25:59.060 16:03:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:59.060 16:03:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:59.060 16:03:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:59.319 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:59.319 16:03:01 -- nvmf/common.sh@161 -- # true 00:25:59.319 16:03:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:59.319 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:59.319 16:03:01 -- nvmf/common.sh@162 -- # true 00:25:59.319 16:03:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:59.319 16:03:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:59.319 16:03:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:59.319 16:03:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:59.319 16:03:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:59.319 16:03:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:59.319 16:03:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:59.319 16:03:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:59.319 16:03:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:59.319 16:03:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:59.319 16:03:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:59.319 16:03:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:59.319 16:03:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:59.319 16:03:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:59.319 16:03:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:59.319 16:03:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:59.319 16:03:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:59.319 16:03:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:59.319 16:03:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:59.319 16:03:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:59.319 16:03:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:59.319 16:03:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:59.319 16:03:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:25:59.319 16:03:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:59.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:25:59.319 00:25:59.319 --- 10.0.0.2 ping statistics --- 00:25:59.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.319 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:25:59.319 16:03:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:59.319 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:59.319 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:25:59.319 00:25:59.319 --- 10.0.0.3 ping statistics --- 00:25:59.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.319 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:25:59.319 16:03:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:59.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:59.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:25:59.319 00:25:59.319 --- 10.0.0.1 ping statistics --- 00:25:59.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.319 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:25:59.319 16:03:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.319 16:03:02 -- nvmf/common.sh@421 -- # return 0 00:25:59.319 16:03:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:59.319 16:03:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.319 16:03:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:59.319 16:03:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:59.319 16:03:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.319 16:03:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:59.319 16:03:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:59.319 16:03:02 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:25:59.319 16:03:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:59.319 16:03:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:59.319 16:03:02 -- common/autotest_common.sh@10 -- # set +x 00:25:59.319 16:03:02 -- nvmf/common.sh@469 -- # nvmfpid=64359 00:25:59.319 16:03:02 -- nvmf/common.sh@470 -- # waitforlisten 64359 00:25:59.319 16:03:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:25:59.319 16:03:02 -- common/autotest_common.sh@819 -- # '[' -z 64359 ']' 00:25:59.319 16:03:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.319 16:03:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:59.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.319 16:03:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.319 16:03:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:59.319 16:03:02 -- common/autotest_common.sh@10 -- # set +x 00:25:59.577 [2024-07-22 16:03:02.209538] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
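The nvmf_veth_init sequence above is easier to read condensed. A minimal sketch of the topology it builds, using the same interface names and addresses that appear in the trace (the second target interface nvmf_tgt_if2 / 10.0.0.3, the individual 'ip link set ... up' calls and the FORWARD rule are omitted for brevity; iproute2 and iptables are assumed, as in common.sh):

    # target side lives in its own namespace, initiator side stays in the default one
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # listener address
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                       # bridge the two veth halves together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                            # same reachability check as above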
00:25:59.577 [2024-07-22 16:03:02.209658] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.577 [2024-07-22 16:03:02.359459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.577 [2024-07-22 16:03:02.431028] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:59.577 [2024-07-22 16:03:02.431200] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.577 [2024-07-22 16:03:02.431216] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.577 [2024-07-22 16:03:02.431228] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:59.577 [2024-07-22 16:03:02.431265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.511 16:03:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:00.511 16:03:03 -- common/autotest_common.sh@852 -- # return 0 00:26:00.511 16:03:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:00.511 16:03:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:00.511 16:03:03 -- common/autotest_common.sh@10 -- # set +x 00:26:00.511 16:03:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.511 16:03:03 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:26:00.511 16:03:03 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:26:00.770 true 00:26:00.770 16:03:03 -- target/tls.sh@82 -- # jq -r .tls_version 00:26:00.770 16:03:03 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:26:01.029 16:03:03 -- target/tls.sh@82 -- # version=0 00:26:01.029 16:03:03 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:26:01.029 16:03:03 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:26:01.288 16:03:04 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:26:01.288 16:03:04 -- target/tls.sh@90 -- # jq -r .tls_version 00:26:01.546 16:03:04 -- target/tls.sh@90 -- # version=13 00:26:01.546 16:03:04 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:26:01.546 16:03:04 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:26:01.804 16:03:04 -- target/tls.sh@98 -- # jq -r .tls_version 00:26:01.804 16:03:04 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:26:02.062 16:03:04 -- target/tls.sh@98 -- # version=7 00:26:02.062 16:03:04 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:26:02.062 16:03:04 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:26:02.062 16:03:04 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:26:02.321 16:03:05 -- target/tls.sh@105 -- # ktls=false 00:26:02.321 16:03:05 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:26:02.321 16:03:05 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:26:02.886 16:03:05 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:26:02.886 16:03:05 -- target/tls.sh@113 -- # jq -r 
.enable_ktls 00:26:03.144 16:03:05 -- target/tls.sh@113 -- # ktls=true 00:26:03.144 16:03:05 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:26:03.144 16:03:05 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:26:03.402 16:03:06 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:26:03.402 16:03:06 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:26:03.661 16:03:06 -- target/tls.sh@121 -- # ktls=false 00:26:03.661 16:03:06 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:26:03.661 16:03:06 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:26:03.661 16:03:06 -- target/tls.sh@49 -- # local key hash crc 00:26:03.661 16:03:06 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:26:03.661 16:03:06 -- target/tls.sh@51 -- # hash=01 00:26:03.661 16:03:06 -- target/tls.sh@52 -- # gzip -1 -c 00:26:03.661 16:03:06 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:26:03.661 16:03:06 -- target/tls.sh@52 -- # tail -c8 00:26:03.661 16:03:06 -- target/tls.sh@52 -- # head -c 4 00:26:03.661 16:03:06 -- target/tls.sh@52 -- # crc='p$H�' 00:26:03.661 16:03:06 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:26:03.661 16:03:06 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:26:03.661 16:03:06 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:03.661 16:03:06 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:03.661 16:03:06 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:26:03.661 16:03:06 -- target/tls.sh@49 -- # local key hash crc 00:26:03.661 16:03:06 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:26:03.661 16:03:06 -- target/tls.sh@51 -- # hash=01 00:26:03.661 16:03:06 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:26:03.661 16:03:06 -- target/tls.sh@52 -- # gzip -1 -c 00:26:03.661 16:03:06 -- target/tls.sh@52 -- # tail -c8 00:26:03.661 16:03:06 -- target/tls.sh@52 -- # head -c 4 00:26:03.661 16:03:06 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:26:03.661 16:03:06 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:26:03.661 16:03:06 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:26:03.661 16:03:06 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:26:03.661 16:03:06 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:26:03.661 16:03:06 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:26:03.661 16:03:06 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:26:03.661 16:03:06 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:03.661 16:03:06 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:26:03.661 16:03:06 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:26:03.661 16:03:06 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:26:03.661 16:03:06 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:26:03.919 16:03:06 -- target/tls.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:26:04.177 16:03:06 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:26:04.177 16:03:06 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:26:04.177 16:03:06 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:04.436 [2024-07-22 16:03:07.203145] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.436 16:03:07 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:04.694 16:03:07 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:04.952 [2024-07-22 16:03:07.679294] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:04.952 [2024-07-22 16:03:07.679546] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:04.952 16:03:07 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:05.220 malloc0 00:26:05.220 16:03:07 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:05.504 16:03:08 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:26:05.762 16:03:08 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:26:17.964 Initializing NVMe Controllers 00:26:17.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:17.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:17.964 Initialization complete. Launching workers. 00:26:17.964 ======================================================== 00:26:17.964 Latency(us) 00:26:17.964 Device Information : IOPS MiB/s Average min max 00:26:17.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9452.99 36.93 6771.51 1517.51 9761.33 00:26:17.964 ======================================================== 00:26:17.964 Total : 9452.99 36.93 6771.51 1517.51 9761.33 00:26:17.964 00:26:17.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
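The format_interchange_psk pipeline traced at tls.sh@49-54 is the piece most worth restating, since the same recipe is reused at tls.sh@168 with hash id 02. A minimal re-derivation of key1 under the same assumptions the script makes (coreutils plus gzip; the hash field and the NVMeTLSkey-1 framing simply follow the interchange format the script emits), with the expected output being exactly the string logged above:

    key=00112233445566778899aabbccddeeff       # hex PSK used for key1.txt
    hash=01                                    # interchange hash id (02 is used for key_long later in the run)
    # the last 8 bytes of a gzip stream are CRC32 + ISIZE, both little-endian; keep the 4 CRC32 bytes
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    # base64 of <hex key string><raw CRC bytes>, wrapped in the NVMeTLSkey-1:<hash>:...: framing
    echo "NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
    # expected: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: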
00:26:17.964 16:03:18 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:26:17.964 16:03:18 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:17.964 16:03:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:17.964 16:03:18 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:17.964 16:03:18 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:26:17.964 16:03:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:17.964 16:03:18 -- target/tls.sh@28 -- # bdevperf_pid=64606 00:26:17.964 16:03:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:17.964 16:03:18 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:17.964 16:03:18 -- target/tls.sh@31 -- # waitforlisten 64606 /var/tmp/bdevperf.sock 00:26:17.964 16:03:18 -- common/autotest_common.sh@819 -- # '[' -z 64606 ']' 00:26:17.964 16:03:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:17.964 16:03:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:17.964 16:03:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:17.964 16:03:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:17.964 16:03:18 -- common/autotest_common.sh@10 -- # set +x 00:26:17.964 [2024-07-22 16:03:18.803321] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:17.964 [2024-07-22 16:03:18.803438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64606 ] 00:26:17.964 [2024-07-22 16:03:18.944954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.964 [2024-07-22 16:03:19.012623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:17.964 16:03:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:17.964 16:03:19 -- common/autotest_common.sh@852 -- # return 0 00:26:17.964 16:03:19 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:26:17.964 [2024-07-22 16:03:19.440807] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:17.964 TLSTESTn1 00:26:17.964 16:03:19 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:26:17.964 Running I/O for 10 seconds... 
00:26:27.938 00:26:27.938 Latency(us) 00:26:27.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.938 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:27.938 Verification LBA range: start 0x0 length 0x2000 00:26:27.938 TLSTESTn1 : 10.02 5393.64 21.07 0.00 0.00 23691.67 5779.08 30146.56 00:26:27.938 =================================================================================================================== 00:26:27.938 Total : 5393.64 21.07 0.00 0.00 23691.67 5779.08 30146.56 00:26:27.938 0 00:26:27.938 16:03:29 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:27.938 16:03:29 -- target/tls.sh@45 -- # killprocess 64606 00:26:27.938 16:03:29 -- common/autotest_common.sh@926 -- # '[' -z 64606 ']' 00:26:27.938 16:03:29 -- common/autotest_common.sh@930 -- # kill -0 64606 00:26:27.938 16:03:29 -- common/autotest_common.sh@931 -- # uname 00:26:27.938 16:03:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:27.938 16:03:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64606 00:26:27.938 killing process with pid 64606 00:26:27.938 Received shutdown signal, test time was about 10.000000 seconds 00:26:27.938 00:26:27.938 Latency(us) 00:26:27.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.938 =================================================================================================================== 00:26:27.938 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:27.938 16:03:29 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:26:27.938 16:03:29 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:26:27.938 16:03:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64606' 00:26:27.938 16:03:29 -- common/autotest_common.sh@945 -- # kill 64606 00:26:27.938 16:03:29 -- common/autotest_common.sh@950 -- # wait 64606 00:26:27.938 16:03:29 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:26:27.938 16:03:29 -- common/autotest_common.sh@640 -- # local es=0 00:26:27.938 16:03:29 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:26:27.938 16:03:29 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:26:27.938 16:03:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:27.938 16:03:29 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:26:27.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
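Every bdevperf invocation in this run, the successful one above and the deliberately failing ones below, follows the same RPC-driven sequence. A condensed sketch with the same sockets and arguments seen in the trace (paths relative to the SPDK checkout; waitforlisten comes from autotest_common.sh, which the test sources):

    # 1. start bdevperf idle (-z) on its own RPC socket and wait until it is listening
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    waitforlisten $! /var/tmp/bdevperf.sock
    # 2. create the NVMe-oF TCP bdev over TLS; --psk points at the interchange-format key file
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/target/key1.txt
    # 3. drive the configured verify job and collect the results shown above
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests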
00:26:27.938 16:03:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:27.938 16:03:29 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:26:27.938 16:03:29 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:27.938 16:03:29 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:27.938 16:03:29 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:27.938 16:03:29 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:26:27.938 16:03:29 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:27.938 16:03:29 -- target/tls.sh@28 -- # bdevperf_pid=64732 00:26:27.938 16:03:29 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:27.938 16:03:29 -- target/tls.sh@31 -- # waitforlisten 64732 /var/tmp/bdevperf.sock 00:26:27.938 16:03:29 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:27.938 16:03:29 -- common/autotest_common.sh@819 -- # '[' -z 64732 ']' 00:26:27.938 16:03:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:27.938 16:03:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:27.938 16:03:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:27.938 16:03:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:27.938 16:03:29 -- common/autotest_common.sh@10 -- # set +x 00:26:27.938 [2024-07-22 16:03:30.001053] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:27.938 [2024-07-22 16:03:30.001433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64732 ] 00:26:27.938 [2024-07-22 16:03:30.147874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.938 [2024-07-22 16:03:30.209867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:28.196 16:03:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:28.196 16:03:31 -- common/autotest_common.sh@852 -- # return 0 00:26:28.196 16:03:31 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:26:28.455 [2024-07-22 16:03:31.275924] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:28.455 [2024-07-22 16:03:31.284944] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:28.455 [2024-07-22 16:03:31.285804] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed67c0 (107): Transport endpoint is not connected 00:26:28.455 [2024-07-22 16:03:31.286794] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed67c0 (9): Bad file descriptor 00:26:28.455 [2024-07-22 16:03:31.287790] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.455 [2024-07-22 16:03:31.287959] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:26:28.455 [2024-07-22 16:03:31.288070] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:28.455 request: 00:26:28.455 { 00:26:28.455 "name": "TLSTEST", 00:26:28.455 "trtype": "tcp", 00:26:28.455 "traddr": "10.0.0.2", 00:26:28.455 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:28.455 "adrfam": "ipv4", 00:26:28.455 "trsvcid": "4420", 00:26:28.455 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:28.455 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:26:28.455 "method": "bdev_nvme_attach_controller", 00:26:28.455 "req_id": 1 00:26:28.455 } 00:26:28.455 Got JSON-RPC error response 00:26:28.455 response: 00:26:28.455 { 00:26:28.455 "code": -32602, 00:26:28.455 "message": "Invalid parameters" 00:26:28.455 } 00:26:28.455 16:03:31 -- target/tls.sh@36 -- # killprocess 64732 00:26:28.455 16:03:31 -- common/autotest_common.sh@926 -- # '[' -z 64732 ']' 00:26:28.455 16:03:31 -- common/autotest_common.sh@930 -- # kill -0 64732 00:26:28.455 16:03:31 -- common/autotest_common.sh@931 -- # uname 00:26:28.455 16:03:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:28.741 16:03:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64732 00:26:28.741 killing process with pid 64732 00:26:28.741 Received shutdown signal, test time was about 10.000000 seconds 00:26:28.741 00:26:28.741 Latency(us) 00:26:28.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.741 =================================================================================================================== 00:26:28.741 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:28.742 16:03:31 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:26:28.742 16:03:31 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:26:28.742 16:03:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64732' 00:26:28.742 16:03:31 -- common/autotest_common.sh@945 -- # kill 64732 00:26:28.742 16:03:31 -- common/autotest_common.sh@950 -- # wait 64732 00:26:28.742 16:03:31 -- target/tls.sh@37 -- # return 1 00:26:28.742 16:03:31 -- common/autotest_common.sh@643 -- # es=1 00:26:28.742 16:03:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:28.742 16:03:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:28.742 16:03:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:28.742 16:03:31 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:26:28.742 16:03:31 -- common/autotest_common.sh@640 -- # local es=0 00:26:28.742 16:03:31 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:26:28.742 16:03:31 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:26:28.742 16:03:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:28.742 16:03:31 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:26:28.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
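The failed attach above and the three failures that follow are the whole point of tls.sh@155-164: the target only registered key1.txt for host1 on cnode1, so changing the key file, the host NQN or the subsystem NQN, or omitting the key entirely, must all be rejected. A condensed sketch of that matrix, reusing the exact attach command from the trace (NOT is the autotest helper that expects the wrapped command to fail; the attach() shorthand below is local to this sketch, not a helper from the tree):

    attach() {   # $1 = subnqn, $2 = hostnqn, $3 = "--psk <file>" or empty
        scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
            -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$1" -q "$2" $3
    }
    NOT attach nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 "--psk test/nvmf/target/key2.txt"   # wrong key
    NOT attach nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 "--psk test/nvmf/target/key1.txt"   # wrong hostnqn
    NOT attach nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 "--psk test/nvmf/target/key1.txt"   # wrong subnqn
    NOT attach nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ""                                   # no key at all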
00:26:28.742 16:03:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:28.742 16:03:31 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:26:28.742 16:03:31 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:28.742 16:03:31 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:28.742 16:03:31 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:26:28.742 16:03:31 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:26:28.742 16:03:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:28.742 16:03:31 -- target/tls.sh@28 -- # bdevperf_pid=64763 00:26:28.742 16:03:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:28.742 16:03:31 -- target/tls.sh@31 -- # waitforlisten 64763 /var/tmp/bdevperf.sock 00:26:28.742 16:03:31 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:28.742 16:03:31 -- common/autotest_common.sh@819 -- # '[' -z 64763 ']' 00:26:28.742 16:03:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:28.742 16:03:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:28.742 16:03:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:28.742 16:03:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:28.742 16:03:31 -- common/autotest_common.sh@10 -- # set +x 00:26:28.742 [2024-07-22 16:03:31.568656] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:28.742 [2024-07-22 16:03:31.568973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64763 ] 00:26:29.000 [2024-07-22 16:03:31.709397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.000 [2024-07-22 16:03:31.768129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.936 16:03:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:29.936 16:03:32 -- common/autotest_common.sh@852 -- # return 0 00:26:29.936 16:03:32 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:26:30.195 [2024-07-22 16:03:32.805216] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:30.196 [2024-07-22 16:03:32.814980] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:26:30.196 [2024-07-22 16:03:32.815219] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:26:30.196 [2024-07-22 16:03:32.815441] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:30.196 [2024-07-22 16:03:32.816156] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec47c0 (107): Transport endpoint is not connected 00:26:30.196 [2024-07-22 16:03:32.817147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec47c0 (9): Bad file descriptor 00:26:30.196 [2024-07-22 16:03:32.818143] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.196 [2024-07-22 16:03:32.818313] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:26:30.196 [2024-07-22 16:03:32.818427] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:30.196 request: 00:26:30.196 { 00:26:30.196 "name": "TLSTEST", 00:26:30.196 "trtype": "tcp", 00:26:30.196 "traddr": "10.0.0.2", 00:26:30.196 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:30.196 "adrfam": "ipv4", 00:26:30.196 "trsvcid": "4420", 00:26:30.196 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.196 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:26:30.196 "method": "bdev_nvme_attach_controller", 00:26:30.196 "req_id": 1 00:26:30.196 } 00:26:30.196 Got JSON-RPC error response 00:26:30.196 response: 00:26:30.196 { 00:26:30.196 "code": -32602, 00:26:30.196 "message": "Invalid parameters" 00:26:30.196 } 00:26:30.196 16:03:32 -- target/tls.sh@36 -- # killprocess 64763 00:26:30.196 16:03:32 -- common/autotest_common.sh@926 -- # '[' -z 64763 ']' 00:26:30.196 16:03:32 -- common/autotest_common.sh@930 -- # kill -0 64763 00:26:30.196 16:03:32 -- common/autotest_common.sh@931 -- # uname 00:26:30.196 16:03:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:30.196 16:03:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64763 00:26:30.196 16:03:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:26:30.196 16:03:32 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:26:30.196 16:03:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64763' 00:26:30.196 killing process with pid 64763 00:26:30.196 16:03:32 -- common/autotest_common.sh@945 -- # kill 64763 00:26:30.196 Received shutdown signal, test time was about 10.000000 seconds 00:26:30.196 00:26:30.196 Latency(us) 00:26:30.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.196 =================================================================================================================== 00:26:30.196 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:30.196 16:03:32 -- common/autotest_common.sh@950 -- # wait 64763 00:26:30.196 16:03:33 -- target/tls.sh@37 -- # return 1 00:26:30.196 16:03:33 -- common/autotest_common.sh@643 -- # es=1 00:26:30.196 16:03:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:30.196 16:03:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:30.196 16:03:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:30.196 16:03:33 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:26:30.196 16:03:33 -- common/autotest_common.sh@640 -- # local es=0 00:26:30.196 16:03:33 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:26:30.196 16:03:33 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:26:30.196 16:03:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:30.196 16:03:33 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:26:30.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
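The target-side messages for the NQN-mismatch cases ('Could not find PSK for identity: NVMe0R01 ...', from tcp_sock_get_key and posix_sock_psk_find_session_server_cb above and below) show what the lookup key is: a fixed NVMe/TLS prefix followed by the connecting host NQN and the requested subsystem NQN. Only the pair registered with nvmf_subsystem_add_host --psk resolves. A small illustration with the names from this run (the meaning of the '0R01' field is not spelled out in this log, so read it as a hedged reconstruction of the observed string, not a spec quote):

    hostnqn=nqn.2016-06.io.spdk:host2          # what the initiator presented
    subnqn=nqn.2016-06.io.spdk:cnode1          # what it asked to connect to
    echo "NVMe0R01 ${hostnqn} ${subnqn}"       # reproduces the identity printed in the error above
    # the target only holds a PSK for "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1",
    # so the lookup fails, the connection is dropped, and the initiator sees errno 107 as logged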
00:26:30.455 16:03:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:30.455 16:03:33 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:26:30.455 16:03:33 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:30.455 16:03:33 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:26:30.456 16:03:33 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:30.456 16:03:33 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:26:30.456 16:03:33 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:30.456 16:03:33 -- target/tls.sh@28 -- # bdevperf_pid=64787 00:26:30.456 16:03:33 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:30.456 16:03:33 -- target/tls.sh@31 -- # waitforlisten 64787 /var/tmp/bdevperf.sock 00:26:30.456 16:03:33 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:30.456 16:03:33 -- common/autotest_common.sh@819 -- # '[' -z 64787 ']' 00:26:30.456 16:03:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:30.456 16:03:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:30.456 16:03:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:30.456 16:03:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:30.456 16:03:33 -- common/autotest_common.sh@10 -- # set +x 00:26:30.456 [2024-07-22 16:03:33.099726] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:30.456 [2024-07-22 16:03:33.100000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64787 ] 00:26:30.456 [2024-07-22 16:03:33.232942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.456 [2024-07-22 16:03:33.291093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:31.391 16:03:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:31.391 16:03:34 -- common/autotest_common.sh@852 -- # return 0 00:26:31.391 16:03:34 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:26:31.649 [2024-07-22 16:03:34.315048] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:31.649 [2024-07-22 16:03:34.325118] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:26:31.649 [2024-07-22 16:03:34.325329] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:26:31.649 [2024-07-22 16:03:34.325533] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:31.649 [2024-07-22 16:03:34.325882] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7257c0 (107): Transport endpoint is not connected 00:26:31.649 [2024-07-22 16:03:34.326873] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7257c0 (9): Bad file descriptor 00:26:31.649 [2024-07-22 16:03:34.327868] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:26:31.649 [2024-07-22 16:03:34.328027] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:26:31.649 [2024-07-22 16:03:34.328144] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:26:31.649 request: 00:26:31.649 { 00:26:31.649 "name": "TLSTEST", 00:26:31.649 "trtype": "tcp", 00:26:31.649 "traddr": "10.0.0.2", 00:26:31.649 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:31.649 "adrfam": "ipv4", 00:26:31.649 "trsvcid": "4420", 00:26:31.649 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:31.649 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:26:31.649 "method": "bdev_nvme_attach_controller", 00:26:31.649 "req_id": 1 00:26:31.649 } 00:26:31.649 Got JSON-RPC error response 00:26:31.649 response: 00:26:31.649 { 00:26:31.649 "code": -32602, 00:26:31.649 "message": "Invalid parameters" 00:26:31.649 } 00:26:31.649 16:03:34 -- target/tls.sh@36 -- # killprocess 64787 00:26:31.650 16:03:34 -- common/autotest_common.sh@926 -- # '[' -z 64787 ']' 00:26:31.650 16:03:34 -- common/autotest_common.sh@930 -- # kill -0 64787 00:26:31.650 16:03:34 -- common/autotest_common.sh@931 -- # uname 00:26:31.650 16:03:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:31.650 16:03:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64787 00:26:31.650 killing process with pid 64787 00:26:31.650 Received shutdown signal, test time was about 10.000000 seconds 00:26:31.650 00:26:31.650 Latency(us) 00:26:31.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.650 =================================================================================================================== 00:26:31.650 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:31.650 16:03:34 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:26:31.650 16:03:34 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:26:31.650 16:03:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64787' 00:26:31.650 16:03:34 -- common/autotest_common.sh@945 -- # kill 64787 00:26:31.650 16:03:34 -- common/autotest_common.sh@950 -- # wait 64787 00:26:31.909 16:03:34 -- target/tls.sh@37 -- # return 1 00:26:31.909 16:03:34 -- common/autotest_common.sh@643 -- # es=1 00:26:31.909 16:03:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:31.909 16:03:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:31.909 16:03:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:31.909 16:03:34 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:26:31.909 16:03:34 -- common/autotest_common.sh@640 -- # local es=0 00:26:31.909 16:03:34 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:26:31.909 16:03:34 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:26:31.909 16:03:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:31.909 16:03:34 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:26:31.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:31.909 16:03:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:31.909 16:03:34 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:26:31.909 16:03:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:31.909 16:03:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:31.909 16:03:34 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:31.909 16:03:34 -- target/tls.sh@23 -- # psk= 00:26:31.909 16:03:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:31.909 16:03:34 -- target/tls.sh@28 -- # bdevperf_pid=64815 00:26:31.909 16:03:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:31.909 16:03:34 -- target/tls.sh@31 -- # waitforlisten 64815 /var/tmp/bdevperf.sock 00:26:31.909 16:03:34 -- common/autotest_common.sh@819 -- # '[' -z 64815 ']' 00:26:31.909 16:03:34 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:31.909 16:03:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:31.909 16:03:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:31.909 16:03:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:31.909 16:03:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:31.909 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:26:31.909 [2024-07-22 16:03:34.616411] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:31.909 [2024-07-22 16:03:34.616742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64815 ] 00:26:31.909 [2024-07-22 16:03:34.757220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.169 [2024-07-22 16:03:34.814999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:33.107 16:03:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:33.107 16:03:35 -- common/autotest_common.sh@852 -- # return 0 00:26:33.107 16:03:35 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:33.107 [2024-07-22 16:03:35.855470] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:33.107 [2024-07-22 16:03:35.857245] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d2000 (9): Bad file descriptor 00:26:33.107 [2024-07-22 16:03:35.858241] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.107 [2024-07-22 16:03:35.858399] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:26:33.107 [2024-07-22 16:03:35.858525] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:33.107 request: 00:26:33.107 { 00:26:33.107 "name": "TLSTEST", 00:26:33.107 "trtype": "tcp", 00:26:33.107 "traddr": "10.0.0.2", 00:26:33.107 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:33.107 "adrfam": "ipv4", 00:26:33.107 "trsvcid": "4420", 00:26:33.107 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:33.107 "method": "bdev_nvme_attach_controller", 00:26:33.107 "req_id": 1 00:26:33.107 } 00:26:33.107 Got JSON-RPC error response 00:26:33.107 response: 00:26:33.107 { 00:26:33.107 "code": -32602, 00:26:33.107 "message": "Invalid parameters" 00:26:33.107 } 00:26:33.107 16:03:35 -- target/tls.sh@36 -- # killprocess 64815 00:26:33.107 16:03:35 -- common/autotest_common.sh@926 -- # '[' -z 64815 ']' 00:26:33.107 16:03:35 -- common/autotest_common.sh@930 -- # kill -0 64815 00:26:33.107 16:03:35 -- common/autotest_common.sh@931 -- # uname 00:26:33.107 16:03:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:33.107 16:03:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64815 00:26:33.107 killing process with pid 64815 00:26:33.107 Received shutdown signal, test time was about 10.000000 seconds 00:26:33.107 00:26:33.107 Latency(us) 00:26:33.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:33.107 =================================================================================================================== 00:26:33.107 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:33.107 16:03:35 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:26:33.107 16:03:35 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:26:33.107 16:03:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64815' 00:26:33.107 16:03:35 -- common/autotest_common.sh@945 -- # kill 64815 00:26:33.107 16:03:35 -- common/autotest_common.sh@950 -- # wait 64815 00:26:33.370 16:03:36 -- target/tls.sh@37 -- # return 1 00:26:33.370 16:03:36 -- common/autotest_common.sh@643 -- # es=1 00:26:33.370 16:03:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:33.370 16:03:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:33.370 16:03:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:33.370 16:03:36 -- target/tls.sh@167 -- # killprocess 64359 00:26:33.370 16:03:36 -- common/autotest_common.sh@926 -- # '[' -z 64359 ']' 00:26:33.370 16:03:36 -- common/autotest_common.sh@930 -- # kill -0 64359 00:26:33.370 16:03:36 -- common/autotest_common.sh@931 -- # uname 00:26:33.370 16:03:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:33.370 16:03:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64359 00:26:33.370 killing process with pid 64359 00:26:33.370 16:03:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:33.370 16:03:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:33.370 16:03:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64359' 00:26:33.370 16:03:36 -- common/autotest_common.sh@945 -- # kill 64359 00:26:33.370 16:03:36 -- common/autotest_common.sh@950 -- # wait 64359 00:26:33.631 16:03:36 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:26:33.631 16:03:36 -- target/tls.sh@49 -- # local key hash crc 00:26:33.631 16:03:36 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:26:33.631 16:03:36 -- target/tls.sh@51 -- # hash=02 00:26:33.631 16:03:36 -- target/tls.sh@52 -- # echo -n 
00112233445566778899aabbccddeeff0011223344556677 00:26:33.631 16:03:36 -- target/tls.sh@52 -- # gzip -1 -c 00:26:33.631 16:03:36 -- target/tls.sh@52 -- # head -c 4 00:26:33.631 16:03:36 -- target/tls.sh@52 -- # tail -c8 00:26:33.631 16:03:36 -- target/tls.sh@52 -- # crc='�e�'\''' 00:26:33.631 16:03:36 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:26:33.631 16:03:36 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:26:33.631 16:03:36 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:26:33.631 16:03:36 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:26:33.631 16:03:36 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:33.631 16:03:36 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:26:33.631 16:03:36 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:33.631 16:03:36 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:26:33.631 16:03:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:33.631 16:03:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:33.631 16:03:36 -- common/autotest_common.sh@10 -- # set +x 00:26:33.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:33.631 16:03:36 -- nvmf/common.sh@469 -- # nvmfpid=64857 00:26:33.631 16:03:36 -- nvmf/common.sh@470 -- # waitforlisten 64857 00:26:33.631 16:03:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:33.631 16:03:36 -- common/autotest_common.sh@819 -- # '[' -z 64857 ']' 00:26:33.631 16:03:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.631 16:03:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:33.631 16:03:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.631 16:03:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:33.631 16:03:36 -- common/autotest_common.sh@10 -- # set +x 00:26:33.631 [2024-07-22 16:03:36.416502] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:33.631 [2024-07-22 16:03:36.416654] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.889 [2024-07-22 16:03:36.561034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.889 [2024-07-22 16:03:36.617202] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:33.889 [2024-07-22 16:03:36.617334] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.889 [2024-07-22 16:03:36.617348] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:33.889 [2024-07-22 16:03:36.617356] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
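The derivation above is the same recipe as for key1 and key2, only with a 48-character hex key and hash id 02, which is why the result is framed as NVMeTLSkey-1:02:...: instead of :01:. The framing can be sanity-checked by decoding it back into the hex key plus the 4 CRC bytes (a quick check assuming base64 and xxd are available on the box):

    echo -n MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw== | base64 -d | xxd
    # expect the ASCII hex string 00112233445566778899aabbccddeeff0011223344556677
    # followed by the CRC32 bytes c1 65 cd 27 that the gzip/tail/head pipeline above produced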
00:26:33.889 [2024-07-22 16:03:36.617385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.824 16:03:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:34.824 16:03:37 -- common/autotest_common.sh@852 -- # return 0 00:26:34.824 16:03:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:34.824 16:03:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:34.824 16:03:37 -- common/autotest_common.sh@10 -- # set +x 00:26:34.824 16:03:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:34.824 16:03:37 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:34.824 16:03:37 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:34.824 16:03:37 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:34.824 [2024-07-22 16:03:37.636606] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:34.824 16:03:37 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:35.083 16:03:37 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:35.358 [2024-07-22 16:03:38.156749] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:35.358 [2024-07-22 16:03:38.156997] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.358 16:03:38 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:35.624 malloc0 00:26:35.624 16:03:38 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:35.883 16:03:38 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:36.142 16:03:38 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:36.142 16:03:38 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:36.142 16:03:38 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:36.142 16:03:38 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:36.142 16:03:38 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:26:36.142 16:03:38 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:36.142 16:03:38 -- target/tls.sh@28 -- # bdevperf_pid=64916 00:26:36.142 16:03:38 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:36.142 16:03:38 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:36.142 16:03:38 -- target/tls.sh@31 -- # waitforlisten 64916 /var/tmp/bdevperf.sock 00:26:36.142 16:03:38 -- common/autotest_common.sh@819 -- # '[' -z 64916 ']' 00:26:36.142 16:03:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:36.142 16:03:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:36.142 16:03:38 -- common/autotest_common.sh@826 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:36.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:36.142 16:03:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:36.142 16:03:38 -- common/autotest_common.sh@10 -- # set +x 00:26:36.402 [2024-07-22 16:03:39.006879] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:36.402 [2024-07-22 16:03:39.007228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64916 ] 00:26:36.402 [2024-07-22 16:03:39.149209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.402 [2024-07-22 16:03:39.206774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:37.338 16:03:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:37.338 16:03:39 -- common/autotest_common.sh@852 -- # return 0 00:26:37.338 16:03:39 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:37.596 [2024-07-22 16:03:40.221547] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:37.597 TLSTESTn1 00:26:37.597 16:03:40 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:26:37.597 Running I/O for 10 seconds... 00:26:49.807 00:26:49.807 Latency(us) 00:26:49.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.807 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:49.807 Verification LBA range: start 0x0 length 0x2000 00:26:49.807 TLSTESTn1 : 10.01 5430.80 21.21 0.00 0.00 23530.10 5183.30 37653.41 00:26:49.807 =================================================================================================================== 00:26:49.807 Total : 5430.80 21.21 0.00 0.00 23530.10 5183.30 37653.41 00:26:49.807 0 00:26:49.807 16:03:50 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:49.807 16:03:50 -- target/tls.sh@45 -- # killprocess 64916 00:26:49.807 16:03:50 -- common/autotest_common.sh@926 -- # '[' -z 64916 ']' 00:26:49.807 16:03:50 -- common/autotest_common.sh@930 -- # kill -0 64916 00:26:49.807 16:03:50 -- common/autotest_common.sh@931 -- # uname 00:26:49.807 16:03:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:49.807 16:03:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64916 00:26:49.807 killing process with pid 64916 00:26:49.807 Received shutdown signal, test time was about 10.000000 seconds 00:26:49.807 00:26:49.807 Latency(us) 00:26:49.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.807 =================================================================================================================== 00:26:49.807 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:49.807 16:03:50 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:26:49.807 16:03:50 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:26:49.807 16:03:50 -- common/autotest_common.sh@944 -- # echo 
'killing process with pid 64916' 00:26:49.807 16:03:50 -- common/autotest_common.sh@945 -- # kill 64916 00:26:49.807 16:03:50 -- common/autotest_common.sh@950 -- # wait 64916 00:26:49.807 16:03:50 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:49.807 16:03:50 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:49.807 16:03:50 -- common/autotest_common.sh@640 -- # local es=0 00:26:49.807 16:03:50 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:49.807 16:03:50 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:26:49.807 16:03:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:49.807 16:03:50 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:26:49.807 16:03:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:49.807 16:03:50 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:49.807 16:03:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:49.807 16:03:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:49.807 16:03:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:49.807 16:03:50 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:26:49.807 16:03:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:49.807 16:03:50 -- target/tls.sh@28 -- # bdevperf_pid=65048 00:26:49.807 16:03:50 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:49.807 16:03:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:49.807 16:03:50 -- target/tls.sh@31 -- # waitforlisten 65048 /var/tmp/bdevperf.sock 00:26:49.807 16:03:50 -- common/autotest_common.sh@819 -- # '[' -z 65048 ']' 00:26:49.807 16:03:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:49.807 16:03:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:49.807 16:03:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:49.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:49.807 16:03:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:49.807 16:03:50 -- common/autotest_common.sh@10 -- # set +x 00:26:49.807 [2024-07-22 16:03:50.762411] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:49.807 [2024-07-22 16:03:50.762772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65048 ] 00:26:49.807 [2024-07-22 16:03:50.898206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.807 [2024-07-22 16:03:50.955613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:49.807 16:03:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:49.807 16:03:51 -- common/autotest_common.sh@852 -- # return 0 00:26:49.807 16:03:51 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:49.807 [2024-07-22 16:03:52.003305] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:49.807 [2024-07-22 16:03:52.003591] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:26:49.807 request: 00:26:49.807 { 00:26:49.807 "name": "TLSTEST", 00:26:49.807 "trtype": "tcp", 00:26:49.807 "traddr": "10.0.0.2", 00:26:49.807 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:49.807 "adrfam": "ipv4", 00:26:49.807 "trsvcid": "4420", 00:26:49.807 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:49.807 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:26:49.807 "method": "bdev_nvme_attach_controller", 00:26:49.807 "req_id": 1 00:26:49.807 } 00:26:49.807 Got JSON-RPC error response 00:26:49.807 response: 00:26:49.807 { 00:26:49.807 "code": -22, 00:26:49.807 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:26:49.807 } 00:26:49.807 16:03:52 -- target/tls.sh@36 -- # killprocess 65048 00:26:49.807 16:03:52 -- common/autotest_common.sh@926 -- # '[' -z 65048 ']' 00:26:49.807 16:03:52 -- common/autotest_common.sh@930 -- # kill -0 65048 00:26:49.807 16:03:52 -- common/autotest_common.sh@931 -- # uname 00:26:49.807 16:03:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:49.807 16:03:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65048 00:26:49.807 killing process with pid 65048 00:26:49.807 Received shutdown signal, test time was about 10.000000 seconds 00:26:49.807 00:26:49.807 Latency(us) 00:26:49.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.807 =================================================================================================================== 00:26:49.807 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:49.807 16:03:52 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:26:49.807 16:03:52 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:26:49.807 16:03:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65048' 00:26:49.807 16:03:52 -- common/autotest_common.sh@945 -- # kill 65048 00:26:49.807 16:03:52 -- common/autotest_common.sh@950 -- # wait 65048 00:26:49.807 16:03:52 -- target/tls.sh@37 -- # return 1 00:26:49.807 16:03:52 -- common/autotest_common.sh@643 -- # es=1 00:26:49.807 16:03:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:49.807 16:03:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:49.807 16:03:52 -- 
common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:49.807 16:03:52 -- target/tls.sh@183 -- # killprocess 64857 00:26:49.807 16:03:52 -- common/autotest_common.sh@926 -- # '[' -z 64857 ']' 00:26:49.807 16:03:52 -- common/autotest_common.sh@930 -- # kill -0 64857 00:26:49.807 16:03:52 -- common/autotest_common.sh@931 -- # uname 00:26:49.807 16:03:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:49.807 16:03:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64857 00:26:49.807 killing process with pid 64857 00:26:49.807 16:03:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:49.807 16:03:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:49.807 16:03:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64857' 00:26:49.808 16:03:52 -- common/autotest_common.sh@945 -- # kill 64857 00:26:49.808 16:03:52 -- common/autotest_common.sh@950 -- # wait 64857 00:26:49.808 16:03:52 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:26:49.808 16:03:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:49.808 16:03:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:49.808 16:03:52 -- common/autotest_common.sh@10 -- # set +x 00:26:49.808 16:03:52 -- nvmf/common.sh@469 -- # nvmfpid=65086 00:26:49.808 16:03:52 -- nvmf/common.sh@470 -- # waitforlisten 65086 00:26:49.808 16:03:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:49.808 16:03:52 -- common/autotest_common.sh@819 -- # '[' -z 65086 ']' 00:26:49.808 16:03:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.808 16:03:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:49.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:49.808 16:03:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.808 16:03:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:49.808 16:03:52 -- common/autotest_common.sh@10 -- # set +x 00:26:49.808 [2024-07-22 16:03:52.515723] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:49.808 [2024-07-22 16:03:52.515820] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:49.808 [2024-07-22 16:03:52.647253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.066 [2024-07-22 16:03:52.704144] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:50.066 [2024-07-22 16:03:52.704755] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.066 [2024-07-22 16:03:52.704895] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.066 [2024-07-22 16:03:52.704975] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:50.066 [2024-07-22 16:03:52.705088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.005 16:03:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:51.005 16:03:53 -- common/autotest_common.sh@852 -- # return 0 00:26:51.005 16:03:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:51.005 16:03:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:51.005 16:03:53 -- common/autotest_common.sh@10 -- # set +x 00:26:51.005 16:03:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.005 16:03:53 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:51.005 16:03:53 -- common/autotest_common.sh@640 -- # local es=0 00:26:51.005 16:03:53 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:51.005 16:03:53 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:26:51.005 16:03:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:51.005 16:03:53 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:26:51.005 16:03:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:51.005 16:03:53 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:51.005 16:03:53 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:51.005 16:03:53 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:51.005 [2024-07-22 16:03:53.752036] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.005 16:03:53 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:51.265 16:03:54 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:51.525 [2024-07-22 16:03:54.220141] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:51.525 [2024-07-22 16:03:54.220365] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.525 16:03:54 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:51.784 malloc0 00:26:51.784 16:03:54 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:52.043 16:03:54 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:52.301 [2024-07-22 16:03:55.006941] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:26:52.301 [2024-07-22 16:03:55.007008] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:26:52.301 [2024-07-22 16:03:55.007039] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:26:52.301 request: 00:26:52.301 { 00:26:52.301 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:52.301 "host": "nqn.2016-06.io.spdk:host1", 00:26:52.301 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:26:52.301 "method": "nvmf_subsystem_add_host", 00:26:52.301 
"req_id": 1 00:26:52.301 } 00:26:52.301 Got JSON-RPC error response 00:26:52.301 response: 00:26:52.301 { 00:26:52.301 "code": -32603, 00:26:52.301 "message": "Internal error" 00:26:52.301 } 00:26:52.301 16:03:55 -- common/autotest_common.sh@643 -- # es=1 00:26:52.301 16:03:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:52.301 16:03:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:52.301 16:03:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:52.301 16:03:55 -- target/tls.sh@189 -- # killprocess 65086 00:26:52.301 16:03:55 -- common/autotest_common.sh@926 -- # '[' -z 65086 ']' 00:26:52.301 16:03:55 -- common/autotest_common.sh@930 -- # kill -0 65086 00:26:52.301 16:03:55 -- common/autotest_common.sh@931 -- # uname 00:26:52.301 16:03:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:52.301 16:03:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65086 00:26:52.301 16:03:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:52.301 16:03:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:52.301 16:03:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65086' 00:26:52.301 killing process with pid 65086 00:26:52.301 16:03:55 -- common/autotest_common.sh@945 -- # kill 65086 00:26:52.301 16:03:55 -- common/autotest_common.sh@950 -- # wait 65086 00:26:52.563 16:03:55 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:52.563 16:03:55 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:26:52.563 16:03:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:52.563 16:03:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:52.563 16:03:55 -- common/autotest_common.sh@10 -- # set +x 00:26:52.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:52.563 16:03:55 -- nvmf/common.sh@469 -- # nvmfpid=65152 00:26:52.563 16:03:55 -- nvmf/common.sh@470 -- # waitforlisten 65152 00:26:52.563 16:03:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:52.563 16:03:55 -- common/autotest_common.sh@819 -- # '[' -z 65152 ']' 00:26:52.563 16:03:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.563 16:03:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:52.563 16:03:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.563 16:03:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:52.563 16:03:55 -- common/autotest_common.sh@10 -- # set +x 00:26:52.563 [2024-07-22 16:03:55.302558] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:52.563 [2024-07-22 16:03:55.302646] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:52.823 [2024-07-22 16:03:55.431720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.823 [2024-07-22 16:03:55.487677] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:52.823 [2024-07-22 16:03:55.487836] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:52.823 [2024-07-22 16:03:55.487850] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:52.823 [2024-07-22 16:03:55.487859] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:52.823 [2024-07-22 16:03:55.487887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.760 16:03:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:53.760 16:03:56 -- common/autotest_common.sh@852 -- # return 0 00:26:53.760 16:03:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:53.760 16:03:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:53.760 16:03:56 -- common/autotest_common.sh@10 -- # set +x 00:26:53.760 16:03:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.760 16:03:56 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:53.760 16:03:56 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:53.760 16:03:56 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:53.760 [2024-07-22 16:03:56.606335] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:54.018 16:03:56 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:54.277 16:03:56 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:54.535 [2024-07-22 16:03:57.198566] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:54.535 [2024-07-22 16:03:57.198798] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.535 16:03:57 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:54.794 malloc0 00:26:54.794 16:03:57 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:55.053 16:03:57 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:55.312 16:03:58 -- target/tls.sh@197 -- # bdevperf_pid=65212 00:26:55.312 16:03:58 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:55.312 16:03:58 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:55.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:55.312 16:03:58 -- target/tls.sh@200 -- # waitforlisten 65212 /var/tmp/bdevperf.sock 00:26:55.312 16:03:58 -- common/autotest_common.sh@819 -- # '[' -z 65212 ']' 00:26:55.312 16:03:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:55.312 16:03:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:55.312 16:03:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
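[Editor's note, not captured log output] The run above exercises the PSK file-permission check: while key_long.txt is left world-readable (chmod 0666), the initiator-side bdev_nvme_attach_controller fails with JSON-RPC error -22 and the target-side nvmf_subsystem_add_host fails with -32603, both logging "Incorrect permissions for PSK file"; after the file is restored to 0600 the same setup succeeds. The lines below are a minimal sketch condensing that successful sequence from the trace; RPC_PY and KEY are shorthand variables introduced here for the rpc.py and key_long.txt paths seen above, not names used by the script itself.
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
chmod 0600 "$KEY"   # PSK file must not be group/world readable or TLS setup is rejected
$RPC_PY nvmf_create_transport -t tcp -o
$RPC_PY nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC_PY nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC_PY bdev_malloc_create 32 4096 -b malloc0
$RPC_PY nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC_PY nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"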
00:26:55.312 16:03:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:55.312 16:03:58 -- common/autotest_common.sh@10 -- # set +x 00:26:55.312 [2024-07-22 16:03:58.059804] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:55.312 [2024-07-22 16:03:58.060114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65212 ] 00:26:55.571 [2024-07-22 16:03:58.195059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.571 [2024-07-22 16:03:58.264103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:56.163 16:03:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:56.163 16:03:59 -- common/autotest_common.sh@852 -- # return 0 00:26:56.163 16:03:59 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:26:56.421 [2024-07-22 16:03:59.222479] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:56.680 TLSTESTn1 00:26:56.680 16:03:59 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:56.939 16:03:59 -- target/tls.sh@205 -- # tgtconf='{ 00:26:56.939 "subsystems": [ 00:26:56.939 { 00:26:56.939 "subsystem": "iobuf", 00:26:56.939 "config": [ 00:26:56.939 { 00:26:56.939 "method": "iobuf_set_options", 00:26:56.939 "params": { 00:26:56.939 "small_pool_count": 8192, 00:26:56.939 "large_pool_count": 1024, 00:26:56.939 "small_bufsize": 8192, 00:26:56.939 "large_bufsize": 135168 00:26:56.939 } 00:26:56.939 } 00:26:56.939 ] 00:26:56.939 }, 00:26:56.939 { 00:26:56.939 "subsystem": "sock", 00:26:56.939 "config": [ 00:26:56.939 { 00:26:56.939 "method": "sock_impl_set_options", 00:26:56.939 "params": { 00:26:56.939 "impl_name": "uring", 00:26:56.939 "recv_buf_size": 2097152, 00:26:56.939 "send_buf_size": 2097152, 00:26:56.939 "enable_recv_pipe": true, 00:26:56.939 "enable_quickack": false, 00:26:56.939 "enable_placement_id": 0, 00:26:56.939 "enable_zerocopy_send_server": false, 00:26:56.939 "enable_zerocopy_send_client": false, 00:26:56.939 "zerocopy_threshold": 0, 00:26:56.939 "tls_version": 0, 00:26:56.939 "enable_ktls": false 00:26:56.939 } 00:26:56.939 }, 00:26:56.939 { 00:26:56.939 "method": "sock_impl_set_options", 00:26:56.939 "params": { 00:26:56.939 "impl_name": "posix", 00:26:56.939 "recv_buf_size": 2097152, 00:26:56.939 "send_buf_size": 2097152, 00:26:56.939 "enable_recv_pipe": true, 00:26:56.939 "enable_quickack": false, 00:26:56.939 "enable_placement_id": 0, 00:26:56.939 "enable_zerocopy_send_server": true, 00:26:56.939 "enable_zerocopy_send_client": false, 00:26:56.939 "zerocopy_threshold": 0, 00:26:56.939 "tls_version": 0, 00:26:56.939 "enable_ktls": false 00:26:56.939 } 00:26:56.939 }, 00:26:56.939 { 00:26:56.939 "method": "sock_impl_set_options", 00:26:56.939 "params": { 00:26:56.939 "impl_name": "ssl", 00:26:56.939 "recv_buf_size": 4096, 00:26:56.939 "send_buf_size": 4096, 00:26:56.939 "enable_recv_pipe": true, 00:26:56.939 "enable_quickack": false, 00:26:56.939 "enable_placement_id": 0, 00:26:56.939 "enable_zerocopy_send_server": true, 00:26:56.939 "enable_zerocopy_send_client": false, 00:26:56.939 
"zerocopy_threshold": 0, 00:26:56.939 "tls_version": 0, 00:26:56.939 "enable_ktls": false 00:26:56.939 } 00:26:56.939 } 00:26:56.939 ] 00:26:56.939 }, 00:26:56.939 { 00:26:56.939 "subsystem": "vmd", 00:26:56.939 "config": [] 00:26:56.939 }, 00:26:56.939 { 00:26:56.939 "subsystem": "accel", 00:26:56.939 "config": [ 00:26:56.939 { 00:26:56.939 "method": "accel_set_options", 00:26:56.939 "params": { 00:26:56.939 "small_cache_size": 128, 00:26:56.939 "large_cache_size": 16, 00:26:56.939 "task_count": 2048, 00:26:56.939 "sequence_count": 2048, 00:26:56.939 "buf_count": 2048 00:26:56.939 } 00:26:56.939 } 00:26:56.939 ] 00:26:56.939 }, 00:26:56.939 { 00:26:56.939 "subsystem": "bdev", 00:26:56.939 "config": [ 00:26:56.939 { 00:26:56.939 "method": "bdev_set_options", 00:26:56.939 "params": { 00:26:56.939 "bdev_io_pool_size": 65535, 00:26:56.939 "bdev_io_cache_size": 256, 00:26:56.939 "bdev_auto_examine": true, 00:26:56.939 "iobuf_small_cache_size": 128, 00:26:56.939 "iobuf_large_cache_size": 16 00:26:56.939 } 00:26:56.939 }, 00:26:56.939 { 00:26:56.939 "method": "bdev_raid_set_options", 00:26:56.939 "params": { 00:26:56.939 "process_window_size_kb": 1024 00:26:56.939 } 00:26:56.939 }, 00:26:56.939 { 00:26:56.939 "method": "bdev_iscsi_set_options", 00:26:56.939 "params": { 00:26:56.939 "timeout_sec": 30 00:26:56.939 } 00:26:56.939 }, 00:26:56.939 { 00:26:56.939 "method": "bdev_nvme_set_options", 00:26:56.939 "params": { 00:26:56.939 "action_on_timeout": "none", 00:26:56.939 "timeout_us": 0, 00:26:56.939 "timeout_admin_us": 0, 00:26:56.939 "keep_alive_timeout_ms": 10000, 00:26:56.939 "transport_retry_count": 4, 00:26:56.939 "arbitration_burst": 0, 00:26:56.939 "low_priority_weight": 0, 00:26:56.939 "medium_priority_weight": 0, 00:26:56.939 "high_priority_weight": 0, 00:26:56.939 "nvme_adminq_poll_period_us": 10000, 00:26:56.939 "nvme_ioq_poll_period_us": 0, 00:26:56.939 "io_queue_requests": 0, 00:26:56.939 "delay_cmd_submit": true, 00:26:56.939 "bdev_retry_count": 3, 00:26:56.939 "transport_ack_timeout": 0, 00:26:56.939 "ctrlr_loss_timeout_sec": 0, 00:26:56.939 "reconnect_delay_sec": 0, 00:26:56.939 "fast_io_fail_timeout_sec": 0, 00:26:56.939 "generate_uuids": false, 00:26:56.939 "transport_tos": 0, 00:26:56.939 "io_path_stat": false, 00:26:56.939 "allow_accel_sequence": false 00:26:56.939 } 00:26:56.939 }, 00:26:56.939 { 00:26:56.939 "method": "bdev_nvme_set_hotplug", 00:26:56.940 "params": { 00:26:56.940 "period_us": 100000, 00:26:56.940 "enable": false 00:26:56.940 } 00:26:56.940 }, 00:26:56.940 { 00:26:56.940 "method": "bdev_malloc_create", 00:26:56.940 "params": { 00:26:56.940 "name": "malloc0", 00:26:56.940 "num_blocks": 8192, 00:26:56.940 "block_size": 4096, 00:26:56.940 "physical_block_size": 4096, 00:26:56.940 "uuid": "a5048a5b-6371-4ecc-b586-27e4f0a3a4aa", 00:26:56.940 "optimal_io_boundary": 0 00:26:56.940 } 00:26:56.940 }, 00:26:56.940 { 00:26:56.940 "method": "bdev_wait_for_examine" 00:26:56.940 } 00:26:56.940 ] 00:26:56.940 }, 00:26:56.940 { 00:26:56.940 "subsystem": "nbd", 00:26:56.940 "config": [] 00:26:56.940 }, 00:26:56.940 { 00:26:56.940 "subsystem": "scheduler", 00:26:56.940 "config": [ 00:26:56.940 { 00:26:56.940 "method": "framework_set_scheduler", 00:26:56.940 "params": { 00:26:56.940 "name": "static" 00:26:56.940 } 00:26:56.940 } 00:26:56.940 ] 00:26:56.940 }, 00:26:56.940 { 00:26:56.940 "subsystem": "nvmf", 00:26:56.940 "config": [ 00:26:56.940 { 00:26:56.940 "method": "nvmf_set_config", 00:26:56.940 "params": { 00:26:56.940 "discovery_filter": "match_any", 00:26:56.940 
"admin_cmd_passthru": { 00:26:56.940 "identify_ctrlr": false 00:26:56.940 } 00:26:56.940 } 00:26:56.940 }, 00:26:56.940 { 00:26:56.940 "method": "nvmf_set_max_subsystems", 00:26:56.940 "params": { 00:26:56.940 "max_subsystems": 1024 00:26:56.940 } 00:26:56.940 }, 00:26:56.940 { 00:26:56.940 "method": "nvmf_set_crdt", 00:26:56.940 "params": { 00:26:56.940 "crdt1": 0, 00:26:56.940 "crdt2": 0, 00:26:56.940 "crdt3": 0 00:26:56.940 } 00:26:56.940 }, 00:26:56.940 { 00:26:56.940 "method": "nvmf_create_transport", 00:26:56.940 "params": { 00:26:56.940 "trtype": "TCP", 00:26:56.940 "max_queue_depth": 128, 00:26:56.940 "max_io_qpairs_per_ctrlr": 127, 00:26:56.940 "in_capsule_data_size": 4096, 00:26:56.940 "max_io_size": 131072, 00:26:56.940 "io_unit_size": 131072, 00:26:56.940 "max_aq_depth": 128, 00:26:56.940 "num_shared_buffers": 511, 00:26:56.940 "buf_cache_size": 4294967295, 00:26:56.940 "dif_insert_or_strip": false, 00:26:56.940 "zcopy": false, 00:26:56.940 "c2h_success": false, 00:26:56.940 "sock_priority": 0, 00:26:56.940 "abort_timeout_sec": 1 00:26:56.940 } 00:26:56.940 }, 00:26:56.940 { 00:26:56.940 "method": "nvmf_create_subsystem", 00:26:56.940 "params": { 00:26:56.940 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:56.940 "allow_any_host": false, 00:26:56.940 "serial_number": "SPDK00000000000001", 00:26:56.940 "model_number": "SPDK bdev Controller", 00:26:56.940 "max_namespaces": 10, 00:26:56.940 "min_cntlid": 1, 00:26:56.940 "max_cntlid": 65519, 00:26:56.940 "ana_reporting": false 00:26:56.940 } 00:26:56.940 }, 00:26:56.940 { 00:26:56.940 "method": "nvmf_subsystem_add_host", 00:26:56.940 "params": { 00:26:56.940 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:56.940 "host": "nqn.2016-06.io.spdk:host1", 00:26:56.940 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:26:56.940 } 00:26:56.940 }, 00:26:56.940 { 00:26:56.940 "method": "nvmf_subsystem_add_ns", 00:26:56.940 "params": { 00:26:56.940 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:56.940 "namespace": { 00:26:56.940 "nsid": 1, 00:26:56.940 "bdev_name": "malloc0", 00:26:56.940 "nguid": "A5048A5B63714ECCB58627E4F0A3A4AA", 00:26:56.940 "uuid": "a5048a5b-6371-4ecc-b586-27e4f0a3a4aa" 00:26:56.940 } 00:26:56.940 } 00:26:56.940 }, 00:26:56.940 { 00:26:56.940 "method": "nvmf_subsystem_add_listener", 00:26:56.940 "params": { 00:26:56.940 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:56.940 "listen_address": { 00:26:56.940 "trtype": "TCP", 00:26:56.940 "adrfam": "IPv4", 00:26:56.940 "traddr": "10.0.0.2", 00:26:56.940 "trsvcid": "4420" 00:26:56.940 }, 00:26:56.940 "secure_channel": true 00:26:56.940 } 00:26:56.940 } 00:26:56.940 ] 00:26:56.940 } 00:26:56.940 ] 00:26:56.940 }' 00:26:56.940 16:03:59 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:26:57.200 16:03:59 -- target/tls.sh@206 -- # bdevperfconf='{ 00:26:57.200 "subsystems": [ 00:26:57.200 { 00:26:57.200 "subsystem": "iobuf", 00:26:57.200 "config": [ 00:26:57.200 { 00:26:57.200 "method": "iobuf_set_options", 00:26:57.200 "params": { 00:26:57.200 "small_pool_count": 8192, 00:26:57.200 "large_pool_count": 1024, 00:26:57.200 "small_bufsize": 8192, 00:26:57.200 "large_bufsize": 135168 00:26:57.200 } 00:26:57.200 } 00:26:57.200 ] 00:26:57.200 }, 00:26:57.200 { 00:26:57.200 "subsystem": "sock", 00:26:57.200 "config": [ 00:26:57.200 { 00:26:57.200 "method": "sock_impl_set_options", 00:26:57.200 "params": { 00:26:57.200 "impl_name": "uring", 00:26:57.200 "recv_buf_size": 2097152, 00:26:57.200 "send_buf_size": 2097152, 
00:26:57.200 "enable_recv_pipe": true, 00:26:57.200 "enable_quickack": false, 00:26:57.200 "enable_placement_id": 0, 00:26:57.200 "enable_zerocopy_send_server": false, 00:26:57.200 "enable_zerocopy_send_client": false, 00:26:57.200 "zerocopy_threshold": 0, 00:26:57.200 "tls_version": 0, 00:26:57.200 "enable_ktls": false 00:26:57.200 } 00:26:57.200 }, 00:26:57.200 { 00:26:57.200 "method": "sock_impl_set_options", 00:26:57.200 "params": { 00:26:57.200 "impl_name": "posix", 00:26:57.200 "recv_buf_size": 2097152, 00:26:57.200 "send_buf_size": 2097152, 00:26:57.200 "enable_recv_pipe": true, 00:26:57.200 "enable_quickack": false, 00:26:57.200 "enable_placement_id": 0, 00:26:57.200 "enable_zerocopy_send_server": true, 00:26:57.200 "enable_zerocopy_send_client": false, 00:26:57.200 "zerocopy_threshold": 0, 00:26:57.200 "tls_version": 0, 00:26:57.200 "enable_ktls": false 00:26:57.200 } 00:26:57.200 }, 00:26:57.200 { 00:26:57.200 "method": "sock_impl_set_options", 00:26:57.200 "params": { 00:26:57.200 "impl_name": "ssl", 00:26:57.200 "recv_buf_size": 4096, 00:26:57.200 "send_buf_size": 4096, 00:26:57.200 "enable_recv_pipe": true, 00:26:57.200 "enable_quickack": false, 00:26:57.200 "enable_placement_id": 0, 00:26:57.200 "enable_zerocopy_send_server": true, 00:26:57.200 "enable_zerocopy_send_client": false, 00:26:57.200 "zerocopy_threshold": 0, 00:26:57.200 "tls_version": 0, 00:26:57.200 "enable_ktls": false 00:26:57.200 } 00:26:57.200 } 00:26:57.200 ] 00:26:57.200 }, 00:26:57.200 { 00:26:57.200 "subsystem": "vmd", 00:26:57.200 "config": [] 00:26:57.200 }, 00:26:57.200 { 00:26:57.200 "subsystem": "accel", 00:26:57.200 "config": [ 00:26:57.200 { 00:26:57.200 "method": "accel_set_options", 00:26:57.200 "params": { 00:26:57.200 "small_cache_size": 128, 00:26:57.200 "large_cache_size": 16, 00:26:57.200 "task_count": 2048, 00:26:57.200 "sequence_count": 2048, 00:26:57.200 "buf_count": 2048 00:26:57.200 } 00:26:57.200 } 00:26:57.200 ] 00:26:57.200 }, 00:26:57.200 { 00:26:57.201 "subsystem": "bdev", 00:26:57.201 "config": [ 00:26:57.201 { 00:26:57.201 "method": "bdev_set_options", 00:26:57.201 "params": { 00:26:57.201 "bdev_io_pool_size": 65535, 00:26:57.201 "bdev_io_cache_size": 256, 00:26:57.201 "bdev_auto_examine": true, 00:26:57.201 "iobuf_small_cache_size": 128, 00:26:57.201 "iobuf_large_cache_size": 16 00:26:57.201 } 00:26:57.201 }, 00:26:57.201 { 00:26:57.201 "method": "bdev_raid_set_options", 00:26:57.201 "params": { 00:26:57.201 "process_window_size_kb": 1024 00:26:57.201 } 00:26:57.201 }, 00:26:57.201 { 00:26:57.201 "method": "bdev_iscsi_set_options", 00:26:57.201 "params": { 00:26:57.201 "timeout_sec": 30 00:26:57.201 } 00:26:57.201 }, 00:26:57.201 { 00:26:57.201 "method": "bdev_nvme_set_options", 00:26:57.201 "params": { 00:26:57.201 "action_on_timeout": "none", 00:26:57.201 "timeout_us": 0, 00:26:57.201 "timeout_admin_us": 0, 00:26:57.201 "keep_alive_timeout_ms": 10000, 00:26:57.201 "transport_retry_count": 4, 00:26:57.201 "arbitration_burst": 0, 00:26:57.201 "low_priority_weight": 0, 00:26:57.201 "medium_priority_weight": 0, 00:26:57.201 "high_priority_weight": 0, 00:26:57.201 "nvme_adminq_poll_period_us": 10000, 00:26:57.201 "nvme_ioq_poll_period_us": 0, 00:26:57.201 "io_queue_requests": 512, 00:26:57.201 "delay_cmd_submit": true, 00:26:57.201 "bdev_retry_count": 3, 00:26:57.201 "transport_ack_timeout": 0, 00:26:57.201 "ctrlr_loss_timeout_sec": 0, 00:26:57.201 "reconnect_delay_sec": 0, 00:26:57.201 "fast_io_fail_timeout_sec": 0, 00:26:57.201 "generate_uuids": false, 00:26:57.201 
"transport_tos": 0, 00:26:57.201 "io_path_stat": false, 00:26:57.201 "allow_accel_sequence": false 00:26:57.201 } 00:26:57.201 }, 00:26:57.201 { 00:26:57.201 "method": "bdev_nvme_attach_controller", 00:26:57.201 "params": { 00:26:57.201 "name": "TLSTEST", 00:26:57.201 "trtype": "TCP", 00:26:57.201 "adrfam": "IPv4", 00:26:57.201 "traddr": "10.0.0.2", 00:26:57.201 "trsvcid": "4420", 00:26:57.201 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:57.201 "prchk_reftag": false, 00:26:57.201 "prchk_guard": false, 00:26:57.201 "ctrlr_loss_timeout_sec": 0, 00:26:57.201 "reconnect_delay_sec": 0, 00:26:57.201 "fast_io_fail_timeout_sec": 0, 00:26:57.201 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:26:57.201 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:57.201 "hdgst": false, 00:26:57.201 "ddgst": false 00:26:57.201 } 00:26:57.201 }, 00:26:57.201 { 00:26:57.201 "method": "bdev_nvme_set_hotplug", 00:26:57.201 "params": { 00:26:57.201 "period_us": 100000, 00:26:57.201 "enable": false 00:26:57.201 } 00:26:57.201 }, 00:26:57.201 { 00:26:57.201 "method": "bdev_wait_for_examine" 00:26:57.201 } 00:26:57.201 ] 00:26:57.201 }, 00:26:57.201 { 00:26:57.201 "subsystem": "nbd", 00:26:57.201 "config": [] 00:26:57.201 } 00:26:57.201 ] 00:26:57.201 }' 00:26:57.201 16:03:59 -- target/tls.sh@208 -- # killprocess 65212 00:26:57.201 16:03:59 -- common/autotest_common.sh@926 -- # '[' -z 65212 ']' 00:26:57.201 16:03:59 -- common/autotest_common.sh@930 -- # kill -0 65212 00:26:57.201 16:03:59 -- common/autotest_common.sh@931 -- # uname 00:26:57.201 16:03:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:57.201 16:03:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65212 00:26:57.201 killing process with pid 65212 00:26:57.201 Received shutdown signal, test time was about 10.000000 seconds 00:26:57.201 00:26:57.201 Latency(us) 00:26:57.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.201 =================================================================================================================== 00:26:57.201 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:57.201 16:03:59 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:26:57.201 16:03:59 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:26:57.201 16:03:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65212' 00:26:57.201 16:03:59 -- common/autotest_common.sh@945 -- # kill 65212 00:26:57.201 16:03:59 -- common/autotest_common.sh@950 -- # wait 65212 00:26:57.459 16:04:00 -- target/tls.sh@209 -- # killprocess 65152 00:26:57.459 16:04:00 -- common/autotest_common.sh@926 -- # '[' -z 65152 ']' 00:26:57.459 16:04:00 -- common/autotest_common.sh@930 -- # kill -0 65152 00:26:57.459 16:04:00 -- common/autotest_common.sh@931 -- # uname 00:26:57.459 16:04:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:57.459 16:04:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65152 00:26:57.459 killing process with pid 65152 00:26:57.459 16:04:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:57.459 16:04:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:57.459 16:04:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65152' 00:26:57.459 16:04:00 -- common/autotest_common.sh@945 -- # kill 65152 00:26:57.459 16:04:00 -- common/autotest_common.sh@950 -- # wait 65152 00:26:57.719 16:04:00 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 
-c /dev/fd/62 00:26:57.719 16:04:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:57.719 16:04:00 -- target/tls.sh@212 -- # echo '{ 00:26:57.719 "subsystems": [ 00:26:57.719 { 00:26:57.719 "subsystem": "iobuf", 00:26:57.719 "config": [ 00:26:57.719 { 00:26:57.719 "method": "iobuf_set_options", 00:26:57.719 "params": { 00:26:57.719 "small_pool_count": 8192, 00:26:57.719 "large_pool_count": 1024, 00:26:57.719 "small_bufsize": 8192, 00:26:57.719 "large_bufsize": 135168 00:26:57.719 } 00:26:57.719 } 00:26:57.719 ] 00:26:57.719 }, 00:26:57.719 { 00:26:57.719 "subsystem": "sock", 00:26:57.719 "config": [ 00:26:57.719 { 00:26:57.719 "method": "sock_impl_set_options", 00:26:57.719 "params": { 00:26:57.719 "impl_name": "uring", 00:26:57.719 "recv_buf_size": 2097152, 00:26:57.719 "send_buf_size": 2097152, 00:26:57.719 "enable_recv_pipe": true, 00:26:57.719 "enable_quickack": false, 00:26:57.719 "enable_placement_id": 0, 00:26:57.719 "enable_zerocopy_send_server": false, 00:26:57.719 "enable_zerocopy_send_client": false, 00:26:57.719 "zerocopy_threshold": 0, 00:26:57.719 "tls_version": 0, 00:26:57.719 "enable_ktls": false 00:26:57.719 } 00:26:57.719 }, 00:26:57.719 { 00:26:57.719 "method": "sock_impl_set_options", 00:26:57.719 "params": { 00:26:57.719 "impl_name": "posix", 00:26:57.719 "recv_buf_size": 2097152, 00:26:57.719 "send_buf_size": 2097152, 00:26:57.719 "enable_recv_pipe": true, 00:26:57.719 "enable_quickack": false, 00:26:57.719 "enable_placement_id": 0, 00:26:57.719 "enable_zerocopy_send_server": true, 00:26:57.719 "enable_zerocopy_send_client": false, 00:26:57.719 "zerocopy_threshold": 0, 00:26:57.719 "tls_version": 0, 00:26:57.719 "enable_ktls": false 00:26:57.719 } 00:26:57.720 }, 00:26:57.720 { 00:26:57.720 "method": "sock_impl_set_options", 00:26:57.720 "params": { 00:26:57.720 "impl_name": "ssl", 00:26:57.720 "recv_buf_size": 4096, 00:26:57.720 "send_buf_size": 4096, 00:26:57.720 "enable_recv_pipe": true, 00:26:57.720 "enable_quickack": false, 00:26:57.720 "enable_placement_id": 0, 00:26:57.720 "enable_zerocopy_send_server": true, 00:26:57.720 "enable_zerocopy_send_client": false, 00:26:57.720 "zerocopy_threshold": 0, 00:26:57.720 "tls_version": 0, 00:26:57.720 "enable_ktls": false 00:26:57.720 } 00:26:57.720 } 00:26:57.720 ] 00:26:57.720 }, 00:26:57.720 { 00:26:57.720 "subsystem": "vmd", 00:26:57.720 "config": [] 00:26:57.720 }, 00:26:57.720 { 00:26:57.720 "subsystem": "accel", 00:26:57.720 "config": [ 00:26:57.720 { 00:26:57.720 "method": "accel_set_options", 00:26:57.720 "params": { 00:26:57.720 "small_cache_size": 128, 00:26:57.720 "large_cache_size": 16, 00:26:57.720 "task_count": 2048, 00:26:57.720 "sequence_count": 2048, 00:26:57.720 "buf_count": 2048 00:26:57.720 } 00:26:57.720 } 00:26:57.720 ] 00:26:57.720 }, 00:26:57.720 { 00:26:57.720 "subsystem": "bdev", 00:26:57.720 "config": [ 00:26:57.720 { 00:26:57.720 "method": "bdev_set_options", 00:26:57.720 "params": { 00:26:57.720 "bdev_io_pool_size": 65535, 00:26:57.720 "bdev_io_cache_size": 256, 00:26:57.720 "bdev_auto_examine": true, 00:26:57.720 "iobuf_small_cache_size": 128, 00:26:57.720 "iobuf_large_cache_size": 16 00:26:57.720 } 00:26:57.720 }, 00:26:57.720 { 00:26:57.720 "method": "bdev_raid_set_options", 00:26:57.720 "params": { 00:26:57.720 "process_window_size_kb": 1024 00:26:57.720 } 00:26:57.720 }, 00:26:57.720 { 00:26:57.720 "method": "bdev_iscsi_set_options", 00:26:57.720 "params": { 00:26:57.720 "timeout_sec": 30 00:26:57.720 } 00:26:57.720 }, 00:26:57.720 { 00:26:57.720 "method": 
"bdev_nvme_set_options", 00:26:57.720 "params": { 00:26:57.720 "action_on_timeout": "none", 00:26:57.720 "timeout_us": 0, 00:26:57.720 "timeout_admin_us": 0, 00:26:57.720 "keep_alive_timeout_ms": 10000, 00:26:57.720 "transport_retry_count": 4, 00:26:57.720 "arbitration_burst": 0, 00:26:57.720 "low_priority_weight": 0, 00:26:57.720 "medium_priority_weight": 0, 00:26:57.720 "high_priority_weight": 0, 00:26:57.720 "nvme_adminq_poll_period_us": 10000, 00:26:57.720 "nvme_ioq_poll_period_us": 0, 00:26:57.720 "io_queue_requests": 0, 00:26:57.720 "delay_cmd_submit": true, 00:26:57.720 "bdev_retry_count": 3, 00:26:57.720 "transport_ack_timeout": 0, 00:26:57.720 "ctrlr_loss_timeout_sec": 0, 00:26:57.720 "reconnect_delay_sec": 0, 00:26:57.720 "fast_io_fail_timeout_sec": 0, 00:26:57.720 "generate_uuids": false, 00:26:57.720 "transport_tos": 0, 00:26:57.720 "io_path_stat": false, 00:26:57.720 "allow_accel_sequence": false 00:26:57.720 } 00:26:57.720 }, 00:26:57.720 { 00:26:57.720 "method": "bdev_nvme_set_hotplug", 00:26:57.720 "params": { 00:26:57.720 "period_us": 100000, 00:26:57.720 "enable": false 00:26:57.720 } 00:26:57.720 }, 00:26:57.720 { 00:26:57.720 "method": "bdev_malloc_create", 00:26:57.720 "params": { 00:26:57.720 "name": "malloc0", 00:26:57.720 "num_blocks": 8192, 00:26:57.720 "block_size": 4096, 00:26:57.720 "physical_block_size": 4096, 00:26:57.720 "uuid": "a5048a5b-6371-4ecc-b586-27e4f0a3a4aa", 00:26:57.720 "optimal_io_boundary": 0 00:26:57.720 } 00:26:57.720 }, 00:26:57.720 { 00:26:57.720 "method": "bdev_wait_for_examine" 00:26:57.720 } 00:26:57.720 ] 00:26:57.720 }, 00:26:57.720 { 00:26:57.720 "subsystem": "nbd", 00:26:57.720 "config": [] 00:26:57.720 }, 00:26:57.720 { 00:26:57.720 "subsystem": "scheduler", 00:26:57.720 "config": [ 00:26:57.720 { 00:26:57.720 "method": "framework_set_scheduler", 00:26:57.720 "params": { 00:26:57.720 "name": "static" 00:26:57.720 } 00:26:57.720 } 00:26:57.720 ] 00:26:57.720 }, 00:26:57.720 { 00:26:57.720 "subsystem": "nvmf", 00:26:57.720 "config": [ 00:26:57.720 { 00:26:57.720 "method": "nvmf_set_config", 00:26:57.720 "params": { 00:26:57.720 "discovery_filter": "match_any", 00:26:57.720 "admin_cmd_passthru": { 00:26:57.720 "identify_ctrlr": false 00:26:57.720 } 00:26:57.720 } 00:26:57.720 }, 00:26:57.720 { 00:26:57.720 "method": "nvmf_set_max_subsystems", 00:26:57.720 "params": { 00:26:57.720 "max_subsystems": 1024 00:26:57.720 } 00:26:57.720 }, 00:26:57.720 { 00:26:57.720 "method": "nvmf_set_crdt", 00:26:57.720 "params": { 00:26:57.720 "crdt1": 0, 00:26:57.720 "crdt2": 0, 00:26:57.720 "crdt3": 0 00:26:57.720 } 00:26:57.720 }, 00:26:57.720 { 00:26:57.720 "method": "nvmf_create_transport", 00:26:57.720 "params": { 00:26:57.720 "trtype": "TCP", 00:26:57.720 "max_queue_depth": 128, 00:26:57.720 "max_io_qpairs_per_ctrlr": 127, 00:26:57.720 "in_capsule_data_size": 4096, 00:26:57.720 "max_io_size": 131072, 00:26:57.720 "io_unit_size": 131072, 00:26:57.720 "max_aq_depth": 128, 00:26:57.720 "num_shared_buffers": 511, 00:26:57.720 "buf_cache_size": 4294967295, 00:26:57.720 "dif_insert_or_strip": false, 00:26:57.720 "zcopy": false, 00:26:57.721 "c2h_success": false, 00:26:57.721 "sock_priority": 0, 00:26:57.721 "abort_timeout_sec": 1 00:26:57.721 } 00:26:57.721 }, 00:26:57.721 { 00:26:57.721 "method": "nvmf_create_subsystem", 00:26:57.721 "params": { 00:26:57.721 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:57.721 "allow_any_host": false, 00:26:57.721 "serial_number": "SPDK00000000000001", 00:26:57.721 "model_number": "SPDK bdev Controller", 00:26:57.721 
"max_namespaces": 10, 00:26:57.721 "min_cntlid": 1, 00:26:57.721 "max_cntlid": 65519, 00:26:57.721 "ana_reporting": false 00:26:57.721 } 00:26:57.721 }, 00:26:57.721 { 00:26:57.721 "method": "nvmf_subsystem_add_host", 00:26:57.721 "params": { 00:26:57.721 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:57.721 "host": "nqn.2016-06.io.spdk:host1", 00:26:57.721 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:26:57.721 } 00:26:57.721 }, 00:26:57.721 { 00:26:57.721 "method": "nvmf_subsystem_add_ns", 00:26:57.721 "params": { 00:26:57.721 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:57.721 "namespace": { 00:26:57.721 "nsid": 1, 00:26:57.721 "bdev_name": "malloc0", 00:26:57.721 "nguid": "A5048A5B63714ECCB58627E4F0A3A4AA", 00:26:57.721 "uuid": "a5048a5b-6371-4ecc-b586-27e4f0a3a4aa" 00:26:57.721 } 00:26:57.721 } 00:26:57.721 }, 00:26:57.721 { 00:26:57.721 "method": "nvmf_subsystem_add_listener", 00:26:57.721 "params": { 00:26:57.721 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:57.721 "listen_address": { 00:26:57.721 "trtype": "TCP", 00:26:57.721 "adrfam": "IPv4", 00:26:57.721 "traddr": "10.0.0.2", 00:26:57.721 "trsvcid": "4420" 00:26:57.721 }, 00:26:57.721 "secure_channel": true 00:26:57.721 } 00:26:57.721 } 00:26:57.721 ] 00:26:57.721 } 00:26:57.721 ] 00:26:57.721 }' 00:26:57.721 16:04:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:57.721 16:04:00 -- common/autotest_common.sh@10 -- # set +x 00:26:57.721 16:04:00 -- nvmf/common.sh@469 -- # nvmfpid=65255 00:26:57.721 16:04:00 -- nvmf/common.sh@470 -- # waitforlisten 65255 00:26:57.721 16:04:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:26:57.721 16:04:00 -- common/autotest_common.sh@819 -- # '[' -z 65255 ']' 00:26:57.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:57.721 16:04:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:57.721 16:04:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:57.721 16:04:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:57.721 16:04:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:57.721 16:04:00 -- common/autotest_common.sh@10 -- # set +x 00:26:57.721 [2024-07-22 16:04:00.398190] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:57.721 [2024-07-22 16:04:00.398470] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:57.721 [2024-07-22 16:04:00.540561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.980 [2024-07-22 16:04:00.597674] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:57.980 [2024-07-22 16:04:00.597817] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:57.980 [2024-07-22 16:04:00.597831] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:57.980 [2024-07-22 16:04:00.597840] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:57.980 [2024-07-22 16:04:00.597870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.980 [2024-07-22 16:04:00.779016] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:57.980 [2024-07-22 16:04:00.810967] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:57.980 [2024-07-22 16:04:00.811167] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:58.547 16:04:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:58.547 16:04:01 -- common/autotest_common.sh@852 -- # return 0 00:26:58.547 16:04:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:58.547 16:04:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:58.547 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:26:58.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:58.805 16:04:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:58.805 16:04:01 -- target/tls.sh@216 -- # bdevperf_pid=65287 00:26:58.805 16:04:01 -- target/tls.sh@217 -- # waitforlisten 65287 /var/tmp/bdevperf.sock 00:26:58.805 16:04:01 -- common/autotest_common.sh@819 -- # '[' -z 65287 ']' 00:26:58.805 16:04:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:58.805 16:04:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:58.805 16:04:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:58.805 16:04:01 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:26:58.805 16:04:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:58.805 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:26:58.805 16:04:01 -- target/tls.sh@213 -- # echo '{ 00:26:58.805 "subsystems": [ 00:26:58.805 { 00:26:58.805 "subsystem": "iobuf", 00:26:58.805 "config": [ 00:26:58.805 { 00:26:58.805 "method": "iobuf_set_options", 00:26:58.805 "params": { 00:26:58.805 "small_pool_count": 8192, 00:26:58.805 "large_pool_count": 1024, 00:26:58.805 "small_bufsize": 8192, 00:26:58.805 "large_bufsize": 135168 00:26:58.805 } 00:26:58.805 } 00:26:58.805 ] 00:26:58.805 }, 00:26:58.805 { 00:26:58.805 "subsystem": "sock", 00:26:58.805 "config": [ 00:26:58.805 { 00:26:58.805 "method": "sock_impl_set_options", 00:26:58.805 "params": { 00:26:58.805 "impl_name": "uring", 00:26:58.805 "recv_buf_size": 2097152, 00:26:58.805 "send_buf_size": 2097152, 00:26:58.805 "enable_recv_pipe": true, 00:26:58.805 "enable_quickack": false, 00:26:58.805 "enable_placement_id": 0, 00:26:58.805 "enable_zerocopy_send_server": false, 00:26:58.805 "enable_zerocopy_send_client": false, 00:26:58.805 "zerocopy_threshold": 0, 00:26:58.805 "tls_version": 0, 00:26:58.805 "enable_ktls": false 00:26:58.805 } 00:26:58.805 }, 00:26:58.805 { 00:26:58.805 "method": "sock_impl_set_options", 00:26:58.805 "params": { 00:26:58.805 "impl_name": "posix", 00:26:58.805 "recv_buf_size": 2097152, 00:26:58.805 "send_buf_size": 2097152, 00:26:58.805 "enable_recv_pipe": true, 00:26:58.806 "enable_quickack": false, 00:26:58.806 "enable_placement_id": 0, 00:26:58.806 "enable_zerocopy_send_server": true, 00:26:58.806 "enable_zerocopy_send_client": false, 00:26:58.806 "zerocopy_threshold": 0, 00:26:58.806 "tls_version": 0, 00:26:58.806 
"enable_ktls": false 00:26:58.806 } 00:26:58.806 }, 00:26:58.806 { 00:26:58.806 "method": "sock_impl_set_options", 00:26:58.806 "params": { 00:26:58.806 "impl_name": "ssl", 00:26:58.806 "recv_buf_size": 4096, 00:26:58.806 "send_buf_size": 4096, 00:26:58.806 "enable_recv_pipe": true, 00:26:58.806 "enable_quickack": false, 00:26:58.806 "enable_placement_id": 0, 00:26:58.806 "enable_zerocopy_send_server": true, 00:26:58.806 "enable_zerocopy_send_client": false, 00:26:58.806 "zerocopy_threshold": 0, 00:26:58.806 "tls_version": 0, 00:26:58.806 "enable_ktls": false 00:26:58.806 } 00:26:58.806 } 00:26:58.806 ] 00:26:58.806 }, 00:26:58.806 { 00:26:58.806 "subsystem": "vmd", 00:26:58.806 "config": [] 00:26:58.806 }, 00:26:58.806 { 00:26:58.806 "subsystem": "accel", 00:26:58.806 "config": [ 00:26:58.806 { 00:26:58.806 "method": "accel_set_options", 00:26:58.806 "params": { 00:26:58.806 "small_cache_size": 128, 00:26:58.806 "large_cache_size": 16, 00:26:58.806 "task_count": 2048, 00:26:58.806 "sequence_count": 2048, 00:26:58.806 "buf_count": 2048 00:26:58.806 } 00:26:58.806 } 00:26:58.806 ] 00:26:58.806 }, 00:26:58.806 { 00:26:58.806 "subsystem": "bdev", 00:26:58.806 "config": [ 00:26:58.806 { 00:26:58.806 "method": "bdev_set_options", 00:26:58.806 "params": { 00:26:58.806 "bdev_io_pool_size": 65535, 00:26:58.806 "bdev_io_cache_size": 256, 00:26:58.806 "bdev_auto_examine": true, 00:26:58.806 "iobuf_small_cache_size": 128, 00:26:58.806 "iobuf_large_cache_size": 16 00:26:58.806 } 00:26:58.806 }, 00:26:58.806 { 00:26:58.806 "method": "bdev_raid_set_options", 00:26:58.806 "params": { 00:26:58.806 "process_window_size_kb": 1024 00:26:58.806 } 00:26:58.806 }, 00:26:58.806 { 00:26:58.806 "method": "bdev_iscsi_set_options", 00:26:58.806 "params": { 00:26:58.806 "timeout_sec": 30 00:26:58.806 } 00:26:58.806 }, 00:26:58.806 { 00:26:58.806 "method": "bdev_nvme_set_options", 00:26:58.806 "params": { 00:26:58.806 "action_on_timeout": "none", 00:26:58.806 "timeout_us": 0, 00:26:58.806 "timeout_admin_us": 0, 00:26:58.806 "keep_alive_timeout_ms": 10000, 00:26:58.806 "transport_retry_count": 4, 00:26:58.806 "arbitration_burst": 0, 00:26:58.806 "low_priority_weight": 0, 00:26:58.806 "medium_priority_weight": 0, 00:26:58.806 "high_priority_weight": 0, 00:26:58.806 "nvme_adminq_poll_period_us": 10000, 00:26:58.806 "nvme_ioq_poll_period_us": 0, 00:26:58.806 "io_queue_requests": 512, 00:26:58.806 "delay_cmd_submit": true, 00:26:58.806 "bdev_retry_count": 3, 00:26:58.806 "transport_ack_timeout": 0, 00:26:58.806 "ctrlr_loss_timeout_sec": 0, 00:26:58.806 "reconnect_delay_sec": 0, 00:26:58.806 "fast_io_fail_timeout_sec": 0, 00:26:58.806 "generate_uuids": false, 00:26:58.806 "transport_tos": 0, 00:26:58.806 "io_path_stat": false, 00:26:58.806 "allow_accel_sequence": false 00:26:58.806 } 00:26:58.806 }, 00:26:58.806 { 00:26:58.806 "method": "bdev_nvme_attach_controller", 00:26:58.806 "params": { 00:26:58.806 "name": "TLSTEST", 00:26:58.806 "trtype": "TCP", 00:26:58.806 "adrfam": "IPv4", 00:26:58.806 "traddr": "10.0.0.2", 00:26:58.806 "trsvcid": "4420", 00:26:58.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:58.806 "prchk_reftag": false, 00:26:58.806 "prchk_guard": false, 00:26:58.806 "ctrlr_loss_timeout_sec": 0, 00:26:58.806 "reconnect_delay_sec": 0, 00:26:58.806 "fast_io_fail_timeout_sec": 0, 00:26:58.806 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:26:58.806 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:58.806 "hdgst": false, 00:26:58.806 "ddgst": false 00:26:58.806 } 00:26:58.806 }, 00:26:58.806 
{ 00:26:58.806 "method": "bdev_nvme_set_hotplug", 00:26:58.806 "params": { 00:26:58.806 "period_us": 100000, 00:26:58.806 "enable": false 00:26:58.806 } 00:26:58.806 }, 00:26:58.806 { 00:26:58.806 "method": "bdev_wait_for_examine" 00:26:58.806 } 00:26:58.806 ] 00:26:58.806 }, 00:26:58.806 { 00:26:58.806 "subsystem": "nbd", 00:26:58.806 "config": [] 00:26:58.806 } 00:26:58.806 ] 00:26:58.806 }' 00:26:58.806 [2024-07-22 16:04:01.473347] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:58.806 [2024-07-22 16:04:01.474145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65287 ] 00:26:58.806 [2024-07-22 16:04:01.612618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.065 [2024-07-22 16:04:01.682976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:59.065 [2024-07-22 16:04:01.813470] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:59.630 16:04:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:59.630 16:04:02 -- common/autotest_common.sh@852 -- # return 0 00:26:59.630 16:04:02 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:26:59.888 Running I/O for 10 seconds... 00:27:09.896 00:27:09.896 Latency(us) 00:27:09.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.896 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:09.896 Verification LBA range: start 0x0 length 0x2000 00:27:09.896 TLSTESTn1 : 10.01 5196.09 20.30 0.00 0.00 24596.40 3872.58 35746.91 00:27:09.896 =================================================================================================================== 00:27:09.896 Total : 5196.09 20.30 0.00 0.00 24596.40 3872.58 35746.91 00:27:09.896 0 00:27:09.897 16:04:12 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:09.897 16:04:12 -- target/tls.sh@223 -- # killprocess 65287 00:27:09.897 16:04:12 -- common/autotest_common.sh@926 -- # '[' -z 65287 ']' 00:27:09.897 16:04:12 -- common/autotest_common.sh@930 -- # kill -0 65287 00:27:09.897 16:04:12 -- common/autotest_common.sh@931 -- # uname 00:27:09.897 16:04:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:09.897 16:04:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65287 00:27:09.897 killing process with pid 65287 00:27:09.897 Received shutdown signal, test time was about 10.000000 seconds 00:27:09.897 00:27:09.897 Latency(us) 00:27:09.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.897 =================================================================================================================== 00:27:09.897 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:09.897 16:04:12 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:27:09.897 16:04:12 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:27:09.897 16:04:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65287' 00:27:09.897 16:04:12 -- common/autotest_common.sh@945 -- # kill 65287 00:27:09.897 16:04:12 -- common/autotest_common.sh@950 -- # wait 65287 00:27:10.157 16:04:12 -- target/tls.sh@224 -- # killprocess 65255 00:27:10.157 16:04:12 -- 
common/autotest_common.sh@926 -- # '[' -z 65255 ']' 00:27:10.157 16:04:12 -- common/autotest_common.sh@930 -- # kill -0 65255 00:27:10.157 16:04:12 -- common/autotest_common.sh@931 -- # uname 00:27:10.157 16:04:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:10.157 16:04:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65255 00:27:10.157 killing process with pid 65255 00:27:10.157 16:04:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:10.157 16:04:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:10.157 16:04:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65255' 00:27:10.157 16:04:12 -- common/autotest_common.sh@945 -- # kill 65255 00:27:10.157 16:04:12 -- common/autotest_common.sh@950 -- # wait 65255 00:27:10.418 16:04:13 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:27:10.418 16:04:13 -- target/tls.sh@227 -- # cleanup 00:27:10.418 16:04:13 -- target/tls.sh@15 -- # process_shm --id 0 00:27:10.418 16:04:13 -- common/autotest_common.sh@796 -- # type=--id 00:27:10.418 16:04:13 -- common/autotest_common.sh@797 -- # id=0 00:27:10.418 16:04:13 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:27:10.418 16:04:13 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:27:10.418 16:04:13 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:27:10.418 16:04:13 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:27:10.418 16:04:13 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:27:10.418 16:04:13 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:27:10.418 nvmf_trace.0 00:27:10.418 16:04:13 -- common/autotest_common.sh@811 -- # return 0 00:27:10.418 16:04:13 -- target/tls.sh@16 -- # killprocess 65287 00:27:10.418 16:04:13 -- common/autotest_common.sh@926 -- # '[' -z 65287 ']' 00:27:10.418 16:04:13 -- common/autotest_common.sh@930 -- # kill -0 65287 00:27:10.418 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (65287) - No such process 00:27:10.418 Process with pid 65287 is not found 00:27:10.418 16:04:13 -- common/autotest_common.sh@953 -- # echo 'Process with pid 65287 is not found' 00:27:10.418 16:04:13 -- target/tls.sh@17 -- # nvmftestfini 00:27:10.418 16:04:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:10.418 16:04:13 -- nvmf/common.sh@116 -- # sync 00:27:10.418 16:04:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:10.418 16:04:13 -- nvmf/common.sh@119 -- # set +e 00:27:10.418 16:04:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:10.418 16:04:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:10.418 rmmod nvme_tcp 00:27:10.418 rmmod nvme_fabrics 00:27:10.418 rmmod nvme_keyring 00:27:10.418 16:04:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:10.418 16:04:13 -- nvmf/common.sh@123 -- # set -e 00:27:10.418 16:04:13 -- nvmf/common.sh@124 -- # return 0 00:27:10.418 16:04:13 -- nvmf/common.sh@477 -- # '[' -n 65255 ']' 00:27:10.418 16:04:13 -- nvmf/common.sh@478 -- # killprocess 65255 00:27:10.418 16:04:13 -- common/autotest_common.sh@926 -- # '[' -z 65255 ']' 00:27:10.418 16:04:13 -- common/autotest_common.sh@930 -- # kill -0 65255 00:27:10.418 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (65255) - No such process 00:27:10.418 Process with pid 65255 is not found 00:27:10.418 16:04:13 -- common/autotest_common.sh@953 -- # echo 
'Process with pid 65255 is not found' 00:27:10.418 16:04:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:10.418 16:04:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:10.418 16:04:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:10.418 16:04:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:10.418 16:04:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:10.418 16:04:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.418 16:04:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:10.418 16:04:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.418 16:04:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:10.418 16:04:13 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:27:10.418 ************************************ 00:27:10.418 END TEST nvmf_tls 00:27:10.418 ************************************ 00:27:10.418 00:27:10.418 real 1m11.525s 00:27:10.419 user 1m52.640s 00:27:10.419 sys 0m23.194s 00:27:10.419 16:04:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:10.419 16:04:13 -- common/autotest_common.sh@10 -- # set +x 00:27:10.682 16:04:13 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:27:10.682 16:04:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:10.682 16:04:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:10.682 16:04:13 -- common/autotest_common.sh@10 -- # set +x 00:27:10.682 ************************************ 00:27:10.682 START TEST nvmf_fips 00:27:10.682 ************************************ 00:27:10.682 16:04:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:27:10.682 * Looking for test storage... 
00:27:10.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:27:10.682 16:04:13 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:10.682 16:04:13 -- nvmf/common.sh@7 -- # uname -s 00:27:10.682 16:04:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.682 16:04:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.682 16:04:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.682 16:04:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.682 16:04:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.682 16:04:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.682 16:04:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.682 16:04:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.682 16:04:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.682 16:04:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.682 16:04:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:27:10.682 16:04:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:27:10.682 16:04:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.682 16:04:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.682 16:04:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:10.682 16:04:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:10.682 16:04:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.682 16:04:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.682 16:04:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.683 16:04:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.683 16:04:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.683 16:04:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.683 16:04:13 -- paths/export.sh@5 -- 
# export PATH 00:27:10.683 16:04:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.683 16:04:13 -- nvmf/common.sh@46 -- # : 0 00:27:10.683 16:04:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:10.683 16:04:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:10.683 16:04:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:10.683 16:04:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.683 16:04:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.683 16:04:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:10.683 16:04:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:10.683 16:04:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:10.683 16:04:13 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:10.683 16:04:13 -- fips/fips.sh@89 -- # check_openssl_version 00:27:10.683 16:04:13 -- fips/fips.sh@83 -- # local target=3.0.0 00:27:10.683 16:04:13 -- fips/fips.sh@85 -- # awk '{print $2}' 00:27:10.683 16:04:13 -- fips/fips.sh@85 -- # openssl version 00:27:10.683 16:04:13 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:27:10.683 16:04:13 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:27:10.683 16:04:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:10.683 16:04:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:10.683 16:04:13 -- scripts/common.sh@335 -- # IFS=.-: 00:27:10.683 16:04:13 -- scripts/common.sh@335 -- # read -ra ver1 00:27:10.683 16:04:13 -- scripts/common.sh@336 -- # IFS=.-: 00:27:10.683 16:04:13 -- scripts/common.sh@336 -- # read -ra ver2 00:27:10.683 16:04:13 -- scripts/common.sh@337 -- # local 'op=>=' 00:27:10.683 16:04:13 -- scripts/common.sh@339 -- # ver1_l=3 00:27:10.683 16:04:13 -- scripts/common.sh@340 -- # ver2_l=3 00:27:10.683 16:04:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:10.683 16:04:13 -- scripts/common.sh@343 -- # case "$op" in 00:27:10.683 16:04:13 -- scripts/common.sh@347 -- # : 1 00:27:10.683 16:04:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:10.683 16:04:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:10.683 16:04:13 -- scripts/common.sh@364 -- # decimal 3 00:27:10.683 16:04:13 -- scripts/common.sh@352 -- # local d=3 00:27:10.683 16:04:13 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:27:10.683 16:04:13 -- scripts/common.sh@354 -- # echo 3 00:27:10.683 16:04:13 -- scripts/common.sh@364 -- # ver1[v]=3 00:27:10.683 16:04:13 -- scripts/common.sh@365 -- # decimal 3 00:27:10.683 16:04:13 -- scripts/common.sh@352 -- # local d=3 00:27:10.683 16:04:13 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:27:10.683 16:04:13 -- scripts/common.sh@354 -- # echo 3 00:27:10.683 16:04:13 -- scripts/common.sh@365 -- # ver2[v]=3 00:27:10.683 16:04:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:10.683 16:04:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:10.683 16:04:13 -- scripts/common.sh@363 -- # (( v++ )) 00:27:10.683 16:04:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:10.683 16:04:13 -- scripts/common.sh@364 -- # decimal 0 00:27:10.683 16:04:13 -- scripts/common.sh@352 -- # local d=0 00:27:10.683 16:04:13 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:27:10.683 16:04:13 -- scripts/common.sh@354 -- # echo 0 00:27:10.683 16:04:13 -- scripts/common.sh@364 -- # ver1[v]=0 00:27:10.683 16:04:13 -- scripts/common.sh@365 -- # decimal 0 00:27:10.683 16:04:13 -- scripts/common.sh@352 -- # local d=0 00:27:10.683 16:04:13 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:27:10.683 16:04:13 -- scripts/common.sh@354 -- # echo 0 00:27:10.683 16:04:13 -- scripts/common.sh@365 -- # ver2[v]=0 00:27:10.683 16:04:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:10.683 16:04:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:10.683 16:04:13 -- scripts/common.sh@363 -- # (( v++ )) 00:27:10.683 16:04:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:10.683 16:04:13 -- scripts/common.sh@364 -- # decimal 9 00:27:10.683 16:04:13 -- scripts/common.sh@352 -- # local d=9 00:27:10.683 16:04:13 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:27:10.683 16:04:13 -- scripts/common.sh@354 -- # echo 9 00:27:10.683 16:04:13 -- scripts/common.sh@364 -- # ver1[v]=9 00:27:10.683 16:04:13 -- scripts/common.sh@365 -- # decimal 0 00:27:10.683 16:04:13 -- scripts/common.sh@352 -- # local d=0 00:27:10.683 16:04:13 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:27:10.683 16:04:13 -- scripts/common.sh@354 -- # echo 0 00:27:10.683 16:04:13 -- scripts/common.sh@365 -- # ver2[v]=0 00:27:10.683 16:04:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:10.683 16:04:13 -- scripts/common.sh@366 -- # return 0 00:27:10.683 16:04:13 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:27:10.683 16:04:13 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:27:10.683 16:04:13 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:27:10.683 16:04:13 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:27:10.683 16:04:13 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:27:10.683 16:04:13 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:27:10.683 16:04:13 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:27:10.683 16:04:13 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:27:10.683 16:04:13 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:27:10.683 16:04:13 -- fips/fips.sh@114 -- # build_openssl_config 00:27:10.683 16:04:13 -- fips/fips.sh@37 -- # cat 00:27:10.683 16:04:13 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:27:10.683 16:04:13 -- fips/fips.sh@58 -- # cat - 00:27:10.683 16:04:13 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:27:10.683 16:04:13 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:27:10.683 16:04:13 -- fips/fips.sh@117 -- # mapfile -t providers 00:27:10.683 16:04:13 -- fips/fips.sh@117 -- # grep name 00:27:10.683 16:04:13 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:27:10.683 16:04:13 -- fips/fips.sh@117 -- # openssl list -providers 00:27:10.683 16:04:13 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:27:10.683 16:04:13 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:27:10.683 16:04:13 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:27:10.683 16:04:13 -- fips/fips.sh@128 -- # : 00:27:10.683 16:04:13 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:27:10.683 16:04:13 -- common/autotest_common.sh@640 -- # local es=0 00:27:10.683 16:04:13 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:27:10.683 16:04:13 -- common/autotest_common.sh@628 -- # local arg=openssl 00:27:10.683 16:04:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:10.683 16:04:13 -- common/autotest_common.sh@632 -- # type -t openssl 00:27:10.683 16:04:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:10.683 16:04:13 -- common/autotest_common.sh@634 -- # type -P openssl 00:27:10.683 16:04:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:10.683 16:04:13 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:27:10.683 16:04:13 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:27:10.683 16:04:13 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:27:10.949 Error setting digest 00:27:10.949 0082F85FAC7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:27:10.949 0082F85FAC7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:27:10.949 16:04:13 -- common/autotest_common.sh@643 -- # es=1 00:27:10.949 16:04:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:10.949 16:04:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:10.949 16:04:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
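
Condensed from the FIPS capability probe traced above: fips.sh compares the OpenSSL version against 3.0.0, looks for fips.so under the modules directory, lists the loaded providers, and finally confirms that a non-approved digest such as MD5 is rejected. A minimal stand-alone sketch of the same probe follows; the commands are the ones visible in this trace, but the provider names and exact error text depend on the distribution's OpenSSL build.

    #!/usr/bin/env bash
    # Rough re-creation of the FIPS capability probe performed by fips.sh.
    openssl version | awk '{print $2}'               # must be >= 3.0.0 for the test to run
    moddir=$(openssl info -modulesdir)               # e.g. /usr/lib64/ossl-modules
    [[ -f "$moddir/fips.so" ]] || { echo "FIPS module not installed"; exit 1; }
    openssl list -providers | grep name              # expect a base and a fips provider
    # With a FIPS-enforcing OPENSSL_CONF in effect, MD5 must be rejected:
    if openssl md5 /dev/null >/dev/null 2>&1; then
        echo "MD5 still usable - FIPS mode is not enforced"
    else
        echo "MD5 rejected - FIPS provider is active"
    fi
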
00:27:10.949 16:04:13 -- fips/fips.sh@131 -- # nvmftestinit 00:27:10.949 16:04:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:10.949 16:04:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.949 16:04:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:10.949 16:04:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:10.949 16:04:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:10.949 16:04:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.949 16:04:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:10.949 16:04:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.949 16:04:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:27:10.949 16:04:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:27:10.949 16:04:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:27:10.949 16:04:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:27:10.949 16:04:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:27:10.949 16:04:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:27:10.949 16:04:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.949 16:04:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.949 16:04:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:10.949 16:04:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:27:10.949 16:04:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:10.949 16:04:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:10.949 16:04:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:10.949 16:04:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.949 16:04:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:10.949 16:04:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:10.949 16:04:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:10.949 16:04:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:10.949 16:04:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:27:10.949 16:04:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:27:10.949 Cannot find device "nvmf_tgt_br" 00:27:10.949 16:04:13 -- nvmf/common.sh@154 -- # true 00:27:10.949 16:04:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:27:10.949 Cannot find device "nvmf_tgt_br2" 00:27:10.949 16:04:13 -- nvmf/common.sh@155 -- # true 00:27:10.949 16:04:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:27:10.949 16:04:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:27:10.949 Cannot find device "nvmf_tgt_br" 00:27:10.949 16:04:13 -- nvmf/common.sh@157 -- # true 00:27:10.949 16:04:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:27:10.949 Cannot find device "nvmf_tgt_br2" 00:27:10.949 16:04:13 -- nvmf/common.sh@158 -- # true 00:27:10.949 16:04:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:27:10.949 16:04:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:27:10.949 16:04:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:10.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:10.949 16:04:13 -- nvmf/common.sh@161 -- # true 00:27:10.949 16:04:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:10.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:27:10.949 16:04:13 -- nvmf/common.sh@162 -- # true 00:27:10.949 16:04:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:27:10.949 16:04:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:10.949 16:04:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:10.949 16:04:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:10.949 16:04:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:10.949 16:04:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:10.949 16:04:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:10.949 16:04:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:10.949 16:04:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:11.217 16:04:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:27:11.217 16:04:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:27:11.217 16:04:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:27:11.217 16:04:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:27:11.217 16:04:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:11.217 16:04:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:11.217 16:04:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:11.217 16:04:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:27:11.217 16:04:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:27:11.217 16:04:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:27:11.217 16:04:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:11.217 16:04:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:11.217 16:04:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:11.217 16:04:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:11.217 16:04:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:27:11.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:11.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:27:11.217 00:27:11.217 --- 10.0.0.2 ping statistics --- 00:27:11.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.217 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:27:11.217 16:04:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:27:11.217 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:11.217 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:27:11.217 00:27:11.217 --- 10.0.0.3 ping statistics --- 00:27:11.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.217 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:27:11.217 16:04:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:11.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:11.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:27:11.217 00:27:11.217 --- 10.0.0.1 ping statistics --- 00:27:11.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.217 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:27:11.217 16:04:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:11.217 16:04:13 -- nvmf/common.sh@421 -- # return 0 00:27:11.217 16:04:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:11.217 16:04:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:11.217 16:04:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:11.217 16:04:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:11.217 16:04:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:11.217 16:04:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:11.217 16:04:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:11.217 16:04:13 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:27:11.217 16:04:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:11.217 16:04:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:11.217 16:04:13 -- common/autotest_common.sh@10 -- # set +x 00:27:11.217 16:04:13 -- nvmf/common.sh@469 -- # nvmfpid=65633 00:27:11.217 16:04:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:11.217 16:04:13 -- nvmf/common.sh@470 -- # waitforlisten 65633 00:27:11.217 16:04:13 -- common/autotest_common.sh@819 -- # '[' -z 65633 ']' 00:27:11.217 16:04:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.217 16:04:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:11.217 16:04:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:11.217 16:04:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:11.217 16:04:13 -- common/autotest_common.sh@10 -- # set +x 00:27:11.217 [2024-07-22 16:04:14.057228] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:11.217 [2024-07-22 16:04:14.058074] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:11.488 [2024-07-22 16:04:14.199876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.488 [2024-07-22 16:04:14.268014] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:11.488 [2024-07-22 16:04:14.268179] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:11.488 [2024-07-22 16:04:14.268196] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:11.488 [2024-07-22 16:04:14.268207] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
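
The nvmf_veth_init calls traced a little earlier build a small virtual topology for this test: a network namespace for the target, veth pairs whose host ends are enslaved to a bridge, and an iptables rule that opens port 4420. Collected into one plain sequence (interface names and addresses copied from this log; the second target interface, nvmf_tgt_if2/10.0.0.3, is set up the same way and omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator-to-target reachability check
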
00:27:11.488 [2024-07-22 16:04:14.268248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.442 16:04:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:12.442 16:04:15 -- common/autotest_common.sh@852 -- # return 0 00:27:12.442 16:04:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:12.442 16:04:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:12.442 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:27:12.442 16:04:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:12.442 16:04:15 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:27:12.442 16:04:15 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:27:12.442 16:04:15 -- fips/fips.sh@138 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:27:12.442 16:04:15 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:27:12.442 16:04:15 -- fips/fips.sh@140 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:27:12.442 16:04:15 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:27:12.442 16:04:15 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:27:12.442 16:04:15 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:12.442 [2024-07-22 16:04:15.284960] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:12.442 [2024-07-22 16:04:15.300916] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:12.442 [2024-07-22 16:04:15.301100] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:12.703 malloc0 00:27:12.703 16:04:15 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:12.703 16:04:15 -- fips/fips.sh@148 -- # bdevperf_pid=65673 00:27:12.703 16:04:15 -- fips/fips.sh@149 -- # waitforlisten 65673 /var/tmp/bdevperf.sock 00:27:12.703 16:04:15 -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:12.703 16:04:15 -- common/autotest_common.sh@819 -- # '[' -z 65673 ']' 00:27:12.703 16:04:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:12.703 16:04:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:12.703 16:04:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:12.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:12.703 16:04:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:12.703 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:27:12.703 [2024-07-22 16:04:15.437377] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
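
This is the point where the FIPS case wires up TLS: the PSK is written to a key file in the NVMeTLSkey-1:01:... interchange format, the file is restricted to mode 0600, the target announces an (experimental) TLS listener on 10.0.0.2:4420, and bdevperf, whose start-up banner begins just above, attaches with --psk pointing at the same file (the full attach command appears a little further down in the trace). A condensed sketch of those host-side steps, with the key, NQNs and paths copied from this log:

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"    # keep the PSK file private

    # Attach from the bdevperf side over TLS, reusing the same PSK file:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$key_path"
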
00:27:12.703 [2024-07-22 16:04:15.437731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65673 ] 00:27:12.962 [2024-07-22 16:04:15.576138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.962 [2024-07-22 16:04:15.645687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:13.899 16:04:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:13.899 16:04:16 -- common/autotest_common.sh@852 -- # return 0 00:27:13.899 16:04:16 -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:27:13.899 [2024-07-22 16:04:16.652673] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:13.899 TLSTESTn1 00:27:13.899 16:04:16 -- fips/fips.sh@155 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:14.157 Running I/O for 10 seconds... 00:27:24.129 00:27:24.129 Latency(us) 00:27:24.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.129 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:24.129 Verification LBA range: start 0x0 length 0x2000 00:27:24.129 TLSTESTn1 : 10.02 5267.74 20.58 0.00 0.00 24260.20 5242.88 29550.78 00:27:24.129 =================================================================================================================== 00:27:24.129 Total : 5267.74 20.58 0.00 0.00 24260.20 5242.88 29550.78 00:27:24.129 0 00:27:24.129 16:04:26 -- fips/fips.sh@1 -- # cleanup 00:27:24.129 16:04:26 -- fips/fips.sh@15 -- # process_shm --id 0 00:27:24.129 16:04:26 -- common/autotest_common.sh@796 -- # type=--id 00:27:24.129 16:04:26 -- common/autotest_common.sh@797 -- # id=0 00:27:24.129 16:04:26 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:27:24.129 16:04:26 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:27:24.129 16:04:26 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:27:24.129 16:04:26 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:27:24.129 16:04:26 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:27:24.129 16:04:26 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:27:24.129 nvmf_trace.0 00:27:24.129 16:04:26 -- common/autotest_common.sh@811 -- # return 0 00:27:24.129 16:04:26 -- fips/fips.sh@16 -- # killprocess 65673 00:27:24.129 16:04:26 -- common/autotest_common.sh@926 -- # '[' -z 65673 ']' 00:27:24.129 16:04:26 -- common/autotest_common.sh@930 -- # kill -0 65673 00:27:24.129 16:04:26 -- common/autotest_common.sh@931 -- # uname 00:27:24.129 16:04:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:24.129 16:04:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65673 00:27:24.129 killing process with pid 65673 00:27:24.129 Received shutdown signal, test time was about 10.000000 seconds 00:27:24.129 00:27:24.129 Latency(us) 00:27:24.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.129 
=================================================================================================================== 00:27:24.129 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:24.129 16:04:26 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:27:24.129 16:04:26 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:27:24.129 16:04:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65673' 00:27:24.129 16:04:26 -- common/autotest_common.sh@945 -- # kill 65673 00:27:24.129 16:04:26 -- common/autotest_common.sh@950 -- # wait 65673 00:27:24.388 16:04:27 -- fips/fips.sh@17 -- # nvmftestfini 00:27:24.388 16:04:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:24.388 16:04:27 -- nvmf/common.sh@116 -- # sync 00:27:24.388 16:04:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:24.388 16:04:27 -- nvmf/common.sh@119 -- # set +e 00:27:24.388 16:04:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:24.388 16:04:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:24.388 rmmod nvme_tcp 00:27:24.388 rmmod nvme_fabrics 00:27:24.388 rmmod nvme_keyring 00:27:24.388 16:04:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:24.646 16:04:27 -- nvmf/common.sh@123 -- # set -e 00:27:24.646 16:04:27 -- nvmf/common.sh@124 -- # return 0 00:27:24.646 16:04:27 -- nvmf/common.sh@477 -- # '[' -n 65633 ']' 00:27:24.646 16:04:27 -- nvmf/common.sh@478 -- # killprocess 65633 00:27:24.646 16:04:27 -- common/autotest_common.sh@926 -- # '[' -z 65633 ']' 00:27:24.646 16:04:27 -- common/autotest_common.sh@930 -- # kill -0 65633 00:27:24.646 16:04:27 -- common/autotest_common.sh@931 -- # uname 00:27:24.646 16:04:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:24.646 16:04:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65633 00:27:24.646 killing process with pid 65633 00:27:24.646 16:04:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:24.646 16:04:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:24.646 16:04:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65633' 00:27:24.646 16:04:27 -- common/autotest_common.sh@945 -- # kill 65633 00:27:24.646 16:04:27 -- common/autotest_common.sh@950 -- # wait 65633 00:27:24.646 16:04:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:24.646 16:04:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:24.646 16:04:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:24.646 16:04:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:24.646 16:04:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:24.646 16:04:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.646 16:04:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:24.646 16:04:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.646 16:04:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:24.646 16:04:27 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:27:24.646 ************************************ 00:27:24.646 END TEST nvmf_fips 00:27:24.646 ************************************ 00:27:24.646 00:27:24.646 real 0m14.214s 00:27:24.646 user 0m19.458s 00:27:24.646 sys 0m5.632s 00:27:24.646 16:04:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:24.646 16:04:27 -- common/autotest_common.sh@10 -- # set +x 00:27:24.905 16:04:27 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:27:24.905 16:04:27 -- nvmf/nvmf.sh@64 -- # 
run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:27:24.905 16:04:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:24.905 16:04:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:24.905 16:04:27 -- common/autotest_common.sh@10 -- # set +x 00:27:24.905 ************************************ 00:27:24.905 START TEST nvmf_fuzz 00:27:24.905 ************************************ 00:27:24.905 16:04:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:27:24.905 * Looking for test storage... 00:27:24.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:24.905 16:04:27 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:24.905 16:04:27 -- nvmf/common.sh@7 -- # uname -s 00:27:24.905 16:04:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:24.905 16:04:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:24.905 16:04:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:24.905 16:04:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:24.905 16:04:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:24.905 16:04:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:24.905 16:04:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:24.905 16:04:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:24.905 16:04:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:24.905 16:04:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:24.905 16:04:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:27:24.905 16:04:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:27:24.905 16:04:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:24.905 16:04:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:24.905 16:04:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:24.905 16:04:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:24.905 16:04:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:24.905 16:04:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.905 16:04:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.905 16:04:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.905 16:04:27 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.905 16:04:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.906 16:04:27 -- paths/export.sh@5 -- # export PATH 00:27:24.906 16:04:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.906 16:04:27 -- nvmf/common.sh@46 -- # : 0 00:27:24.906 16:04:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:24.906 16:04:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:24.906 16:04:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:24.906 16:04:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:24.906 16:04:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:24.906 16:04:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:24.906 16:04:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:24.906 16:04:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:24.906 16:04:27 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:27:24.906 16:04:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:24.906 16:04:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:24.906 16:04:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:24.906 16:04:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:24.906 16:04:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:24.906 16:04:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.906 16:04:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:24.906 16:04:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.906 16:04:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:27:24.906 16:04:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:27:24.906 16:04:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:27:24.906 16:04:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:27:24.906 16:04:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:27:24.906 16:04:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:27:24.906 16:04:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.906 16:04:27 
-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.906 16:04:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:24.906 16:04:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:27:24.906 16:04:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:24.906 16:04:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:24.906 16:04:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:24.906 16:04:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:24.906 16:04:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:24.906 16:04:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:24.906 16:04:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:24.906 16:04:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:24.906 16:04:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:27:24.906 16:04:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:27:24.906 Cannot find device "nvmf_tgt_br" 00:27:24.906 16:04:27 -- nvmf/common.sh@154 -- # true 00:27:24.906 16:04:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:27:24.906 Cannot find device "nvmf_tgt_br2" 00:27:24.906 16:04:27 -- nvmf/common.sh@155 -- # true 00:27:24.906 16:04:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:27:24.906 16:04:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:27:24.906 Cannot find device "nvmf_tgt_br" 00:27:24.906 16:04:27 -- nvmf/common.sh@157 -- # true 00:27:24.906 16:04:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:27:24.906 Cannot find device "nvmf_tgt_br2" 00:27:24.906 16:04:27 -- nvmf/common.sh@158 -- # true 00:27:24.906 16:04:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:27:24.906 16:04:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:27:25.165 16:04:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:25.165 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:25.165 16:04:27 -- nvmf/common.sh@161 -- # true 00:27:25.165 16:04:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:25.165 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:25.165 16:04:27 -- nvmf/common.sh@162 -- # true 00:27:25.165 16:04:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:27:25.165 16:04:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:25.165 16:04:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:25.165 16:04:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:25.165 16:04:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:25.165 16:04:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:25.165 16:04:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:25.165 16:04:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:25.165 16:04:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:25.165 16:04:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:27:25.165 16:04:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:27:25.165 16:04:27 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:27:25.165 16:04:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:27:25.165 16:04:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:25.165 16:04:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:25.165 16:04:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:25.165 16:04:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:27:25.165 16:04:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:27:25.165 16:04:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:27:25.165 16:04:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:25.165 16:04:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:25.165 16:04:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:25.165 16:04:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:25.165 16:04:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:27:25.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:25.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:27:25.165 00:27:25.165 --- 10.0.0.2 ping statistics --- 00:27:25.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.165 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:27:25.165 16:04:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:27:25.165 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:25.165 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:27:25.165 00:27:25.165 --- 10.0.0.3 ping statistics --- 00:27:25.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.165 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:27:25.165 16:04:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:25.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:25.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:27:25.165 00:27:25.165 --- 10.0.0.1 ping statistics --- 00:27:25.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.165 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:27:25.165 16:04:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.166 16:04:27 -- nvmf/common.sh@421 -- # return 0 00:27:25.166 16:04:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:25.166 16:04:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.166 16:04:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:25.166 16:04:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:25.166 16:04:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.166 16:04:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:25.166 16:04:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:25.166 16:04:27 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=65998 00:27:25.166 16:04:27 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:25.166 16:04:27 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:25.166 16:04:27 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 65998 00:27:25.166 16:04:27 -- common/autotest_common.sh@819 -- # '[' -z 65998 ']' 00:27:25.166 16:04:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.166 16:04:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:25.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.166 16:04:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
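
Once the fuzz target is listening on the default RPC socket, the case provisions a malloc-backed subsystem and points nvme_fuzz at it; the individual rpc_cmd invocations are spread through the trace that follows. Gathered into one place, using plain scripts/rpc.py calls in place of the rpc_cmd helper (every value below is copied from this log; paths are relative to the SPDK repository root):

    rpc=scripts/rpc.py    # talks to /var/tmp/spdk.sock by default
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create -b Malloc0 64 512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 30-second randomized run against the new listener, then a replay of the canned example commands:
    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" \
        -j test/app/fuzz/nvme_fuzz/example.json -a
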
00:27:25.166 16:04:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:25.166 16:04:27 -- common/autotest_common.sh@10 -- # set +x 00:27:26.542 16:04:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:26.542 16:04:29 -- common/autotest_common.sh@852 -- # return 0 00:27:26.542 16:04:29 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:26.542 16:04:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.542 16:04:29 -- common/autotest_common.sh@10 -- # set +x 00:27:26.542 16:04:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.542 16:04:29 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:27:26.542 16:04:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.542 16:04:29 -- common/autotest_common.sh@10 -- # set +x 00:27:26.542 Malloc0 00:27:26.542 16:04:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.542 16:04:29 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:26.542 16:04:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.542 16:04:29 -- common/autotest_common.sh@10 -- # set +x 00:27:26.542 16:04:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.542 16:04:29 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:26.542 16:04:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.542 16:04:29 -- common/autotest_common.sh@10 -- # set +x 00:27:26.542 16:04:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.542 16:04:29 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:26.542 16:04:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.542 16:04:29 -- common/autotest_common.sh@10 -- # set +x 00:27:26.542 16:04:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.542 16:04:29 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:27:26.542 16:04:29 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:27:26.801 Shutting down the fuzz application 00:27:26.801 16:04:29 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:27:27.060 Shutting down the fuzz application 00:27:27.060 16:04:29 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:27.060 16:04:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:27.060 16:04:29 -- common/autotest_common.sh@10 -- # set +x 00:27:27.060 16:04:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:27.060 16:04:29 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:27.060 16:04:29 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:27:27.060 16:04:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:27.060 16:04:29 -- nvmf/common.sh@116 -- # sync 00:27:27.318 16:04:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:27.318 16:04:29 -- nvmf/common.sh@119 -- # set +e 00:27:27.318 16:04:29 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:27:27.318 16:04:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:27.318 rmmod nvme_tcp 00:27:27.318 rmmod nvme_fabrics 00:27:27.318 rmmod nvme_keyring 00:27:27.318 16:04:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:27.318 16:04:29 -- nvmf/common.sh@123 -- # set -e 00:27:27.318 16:04:29 -- nvmf/common.sh@124 -- # return 0 00:27:27.318 16:04:30 -- nvmf/common.sh@477 -- # '[' -n 65998 ']' 00:27:27.318 16:04:30 -- nvmf/common.sh@478 -- # killprocess 65998 00:27:27.318 16:04:30 -- common/autotest_common.sh@926 -- # '[' -z 65998 ']' 00:27:27.318 16:04:30 -- common/autotest_common.sh@930 -- # kill -0 65998 00:27:27.318 16:04:30 -- common/autotest_common.sh@931 -- # uname 00:27:27.318 16:04:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:27.318 16:04:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65998 00:27:27.318 killing process with pid 65998 00:27:27.318 16:04:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:27.318 16:04:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:27.318 16:04:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65998' 00:27:27.318 16:04:30 -- common/autotest_common.sh@945 -- # kill 65998 00:27:27.318 16:04:30 -- common/autotest_common.sh@950 -- # wait 65998 00:27:27.577 16:04:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:27.577 16:04:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:27.577 16:04:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:27.577 16:04:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:27.577 16:04:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:27.577 16:04:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.577 16:04:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:27.577 16:04:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.577 16:04:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:27.577 16:04:30 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:27:27.577 00:27:27.577 real 0m2.715s 00:27:27.577 user 0m3.040s 00:27:27.577 sys 0m0.536s 00:27:27.577 16:04:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:27.577 16:04:30 -- common/autotest_common.sh@10 -- # set +x 00:27:27.577 ************************************ 00:27:27.577 END TEST nvmf_fuzz 00:27:27.577 ************************************ 00:27:27.577 16:04:30 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:27:27.577 16:04:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:27.577 16:04:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:27.577 16:04:30 -- common/autotest_common.sh@10 -- # set +x 00:27:27.577 ************************************ 00:27:27.577 START TEST nvmf_multiconnection 00:27:27.577 ************************************ 00:27:27.577 16:04:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:27:27.577 * Looking for test storage... 
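Before the multiconnection output continues, the fuzz stage that just ended is easiest to read as two nvme_fuzz passes against cnode1, both taken verbatim from the command lines above: a bounded randomized pass (-t 30, -S 123456) followed by a replay of the bundled example.json. The -N and -a flags are kept exactly as logged; their meaning is not spelled out in this log.

  # pass 1: randomized commands against the TCP subsystem (-t 30, -S 123456)
  /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz \
    -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a
  # pass 2: replay the canned command set shipped with the fuzzer
  /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz \
    -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' \
    -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a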
00:27:27.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:27.577 16:04:30 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:27.577 16:04:30 -- nvmf/common.sh@7 -- # uname -s 00:27:27.577 16:04:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.577 16:04:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.577 16:04:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.577 16:04:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.577 16:04:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.577 16:04:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.577 16:04:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.577 16:04:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.577 16:04:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.577 16:04:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.577 16:04:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:27:27.577 16:04:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:27:27.577 16:04:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.577 16:04:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.577 16:04:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:27.577 16:04:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:27.577 16:04:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.577 16:04:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.577 16:04:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.577 16:04:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.577 16:04:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.577 16:04:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.577 16:04:30 -- 
paths/export.sh@5 -- # export PATH 00:27:27.577 16:04:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.577 16:04:30 -- nvmf/common.sh@46 -- # : 0 00:27:27.577 16:04:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:27.577 16:04:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:27.577 16:04:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:27.577 16:04:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.577 16:04:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.577 16:04:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:27.577 16:04:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:27.577 16:04:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:27.577 16:04:30 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:27.577 16:04:30 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:27.577 16:04:30 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:27:27.577 16:04:30 -- target/multiconnection.sh@16 -- # nvmftestinit 00:27:27.577 16:04:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:27.577 16:04:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.577 16:04:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:27.577 16:04:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:27.577 16:04:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:27.577 16:04:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.577 16:04:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:27.577 16:04:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.577 16:04:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:27:27.577 16:04:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:27:27.577 16:04:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:27:27.577 16:04:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:27:27.577 16:04:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:27:27.577 16:04:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:27:27.577 16:04:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:27.577 16:04:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:27.577 16:04:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:27.577 16:04:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:27:27.577 16:04:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:27.577 16:04:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:27.577 16:04:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:27.577 16:04:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:27.577 16:04:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:27.577 16:04:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:27.577 16:04:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:27.577 16:04:30 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:27.577 16:04:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:27:27.835 16:04:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:27:27.835 Cannot find device "nvmf_tgt_br" 00:27:27.835 16:04:30 -- nvmf/common.sh@154 -- # true 00:27:27.835 16:04:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:27:27.835 Cannot find device "nvmf_tgt_br2" 00:27:27.835 16:04:30 -- nvmf/common.sh@155 -- # true 00:27:27.836 16:04:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:27:27.836 16:04:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:27:27.836 Cannot find device "nvmf_tgt_br" 00:27:27.836 16:04:30 -- nvmf/common.sh@157 -- # true 00:27:27.836 16:04:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:27:27.836 Cannot find device "nvmf_tgt_br2" 00:27:27.836 16:04:30 -- nvmf/common.sh@158 -- # true 00:27:27.836 16:04:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:27:27.836 16:04:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:27:27.836 16:04:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:27.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:27.836 16:04:30 -- nvmf/common.sh@161 -- # true 00:27:27.836 16:04:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:27.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:27.836 16:04:30 -- nvmf/common.sh@162 -- # true 00:27:27.836 16:04:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:27:27.836 16:04:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:27.836 16:04:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:27.836 16:04:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:27.836 16:04:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:27.836 16:04:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:27.836 16:04:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:27.836 16:04:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:27.836 16:04:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:27.836 16:04:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:27:27.836 16:04:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:27:27.836 16:04:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:27:27.836 16:04:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:27:27.836 16:04:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:27.836 16:04:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:27.836 16:04:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:27.836 16:04:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:27:27.836 16:04:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:27:27.836 16:04:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:27:27.836 16:04:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:27.836 16:04:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:27.836 
16:04:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:27.836 16:04:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:28.094 16:04:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:27:28.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:28.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:27:28.094 00:27:28.094 --- 10.0.0.2 ping statistics --- 00:27:28.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.094 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:27:28.094 16:04:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:27:28.094 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:28.094 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:27:28.094 00:27:28.094 --- 10.0.0.3 ping statistics --- 00:27:28.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.094 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:27:28.094 16:04:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:28.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:28.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:27:28.094 00:27:28.094 --- 10.0.0.1 ping statistics --- 00:27:28.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.094 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:27:28.094 16:04:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:28.094 16:04:30 -- nvmf/common.sh@421 -- # return 0 00:27:28.094 16:04:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:28.094 16:04:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:28.095 16:04:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:28.095 16:04:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:28.095 16:04:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:28.095 16:04:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:28.095 16:04:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:28.095 16:04:30 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:27:28.095 16:04:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:28.095 16:04:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:28.095 16:04:30 -- common/autotest_common.sh@10 -- # set +x 00:27:28.095 16:04:30 -- nvmf/common.sh@469 -- # nvmfpid=66195 00:27:28.095 16:04:30 -- nvmf/common.sh@470 -- # waitforlisten 66195 00:27:28.095 16:04:30 -- common/autotest_common.sh@819 -- # '[' -z 66195 ']' 00:27:28.095 16:04:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:28.095 16:04:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.095 16:04:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:28.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.095 16:04:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.095 16:04:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:28.095 16:04:30 -- common/autotest_common.sh@10 -- # set +x 00:27:28.095 [2024-07-22 16:04:30.790871] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
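The nvmf_veth_init sequence above defines the test topology: a nvmf_tgt_ns_spdk namespace holding the target ends of two veth pairs (10.0.0.2 and 10.0.0.3), an initiator-side veth at 10.0.0.1, everything bridged through nvmf_br, an iptables rule admitting TCP port 4420, and ping checks in both directions. Condensed from the commands logged above (the individual link-up steps and the bridge FORWARD rule are omitted here for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1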
00:27:28.095 [2024-07-22 16:04:30.790975] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:28.095 [2024-07-22 16:04:30.928562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:28.353 [2024-07-22 16:04:30.988007] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:28.353 [2024-07-22 16:04:30.988328] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:28.353 [2024-07-22 16:04:30.988384] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:28.353 [2024-07-22 16:04:30.988556] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:28.353 [2024-07-22 16:04:30.988895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.353 [2024-07-22 16:04:30.989088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.353 [2024-07-22 16:04:30.989082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:28.353 [2024-07-22 16:04:30.989038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:28.918 16:04:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:28.918 16:04:31 -- common/autotest_common.sh@852 -- # return 0 00:27:28.918 16:04:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:28.918 16:04:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:28.918 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 16:04:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:29.177 16:04:31 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:29.177 16:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 [2024-07-22 16:04:31.807972] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:29.177 16:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:31 -- target/multiconnection.sh@21 -- # seq 1 11 00:27:29.177 16:04:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.177 16:04:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:29.177 16:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 Malloc1 00:27:29.177 16:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:27:29.177 16:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 16:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:29.177 16:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 16:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:29.177 16:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 [2024-07-22 16:04:31.875864] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:29.177 16:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.177 16:04:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:27:29.177 16:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 Malloc2 00:27:29.177 16:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:27:29.177 16:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 16:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:27:29.177 16:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 16:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:29.177 16:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 16:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.177 16:04:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:27:29.177 16:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 Malloc3 00:27:29.177 16:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:27:29.177 16:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 16:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:27:29.177 16:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 16:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:29.177 16:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 16:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.177 16:04:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:27:29.177 
16:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 Malloc4 00:27:29.177 16:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:27:29.177 16:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 16:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:27:29.177 16:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 16:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:27:29.177 16:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 16:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.177 16:04:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:27:29.177 16:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 Malloc5 00:27:29.177 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:27:29.177 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:27:29.177 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:27:29.177 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.177 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.177 16:04:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.177 16:04:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:27:29.177 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.177 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.436 Malloc6 00:27:29.436 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.436 16:04:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:27:29.436 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.436 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.436 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.436 16:04:32 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:27:29.436 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.436 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.436 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.436 16:04:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:27:29.436 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.436 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.436 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.436 16:04:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.436 16:04:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:27:29.436 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.436 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.436 Malloc7 00:27:29.436 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.436 16:04:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:27:29.436 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.436 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.436 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.436 16:04:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:27:29.437 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.437 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.437 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.437 16:04:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:27:29.437 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.437 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.437 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.437 16:04:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.437 16:04:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:27:29.437 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.437 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.437 Malloc8 00:27:29.437 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.437 16:04:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:27:29.437 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.437 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.437 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.437 16:04:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:27:29.437 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.437 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.437 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.437 16:04:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:27:29.437 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.437 16:04:32 -- common/autotest_common.sh@10 -- # set +x 
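The provisioning loop above (it runs on through cnode11 below) sets up one malloc-backed subsystem per connection; rpc_cmd is the suite's wrapper for issuing RPCs to the target over /var/tmp/spdk.sock. With the TCP transport created once up front, each subsystem amounts to four calls (the 64/512 sizes come from MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE set earlier in multiconnection.sh):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192              # once, before the loop
  for i in $(seq 1 $NVMF_SUBSYS); do                            # NVMF_SUBSYS=11
    rpc_cmd bdev_malloc_create 64 512 -b Malloc$i               # 64 MB bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i   # -s: serial later matched by waitforserial
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done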
00:27:29.437 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.437 16:04:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.437 16:04:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:27:29.437 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.437 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.437 Malloc9 00:27:29.437 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.437 16:04:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:27:29.437 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.437 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.437 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.437 16:04:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:27:29.437 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.437 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.437 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.437 16:04:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:27:29.437 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.437 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.437 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.437 16:04:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.437 16:04:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:27:29.437 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.437 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.437 Malloc10 00:27:29.437 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.437 16:04:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:27:29.437 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.437 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.437 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.437 16:04:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:27:29.437 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.437 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.437 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.437 16:04:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:27:29.437 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.437 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.437 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.437 16:04:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.437 16:04:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:27:29.437 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.437 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.437 Malloc11 00:27:29.437 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.437 16:04:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:27:29.437 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.437 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.437 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.437 16:04:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:27:29.437 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.437 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.437 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.437 16:04:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:27:29.437 16:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.437 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:29.437 16:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.437 16:04:32 -- target/multiconnection.sh@28 -- # seq 1 11 00:27:29.437 16:04:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.437 16:04:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 --hostid=3afe7664-1acb-4c6d-8a94-b57f48f48b78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:29.695 16:04:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:27:29.695 16:04:32 -- common/autotest_common.sh@1177 -- # local i=0 00:27:29.695 16:04:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:29.695 16:04:32 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:29.695 16:04:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:31.597 16:04:34 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:31.597 16:04:34 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:31.597 16:04:34 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:27:31.597 16:04:34 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:31.597 16:04:34 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:31.597 16:04:34 -- common/autotest_common.sh@1187 -- # return 0 00:27:31.597 16:04:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:31.597 16:04:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 --hostid=3afe7664-1acb-4c6d-8a94-b57f48f48b78 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:27:31.855 16:04:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:27:31.855 16:04:34 -- common/autotest_common.sh@1177 -- # local i=0 00:27:31.855 16:04:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:31.855 16:04:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:31.855 16:04:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:33.754 16:04:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:33.754 16:04:36 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:27:33.754 16:04:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:33.754 16:04:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:33.754 16:04:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:33.754 16:04:36 -- common/autotest_common.sh@1187 -- # return 0 00:27:33.754 16:04:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:27:33.754 16:04:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 --hostid=3afe7664-1acb-4c6d-8a94-b57f48f48b78 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:27:34.012 16:04:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:27:34.012 16:04:36 -- common/autotest_common.sh@1177 -- # local i=0 00:27:34.012 16:04:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:34.012 16:04:36 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:34.012 16:04:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:35.914 16:04:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:35.914 16:04:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:35.914 16:04:38 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:27:35.914 16:04:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:35.914 16:04:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:35.914 16:04:38 -- common/autotest_common.sh@1187 -- # return 0 00:27:35.914 16:04:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:35.914 16:04:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 --hostid=3afe7664-1acb-4c6d-8a94-b57f48f48b78 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:27:36.171 16:04:38 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:27:36.172 16:04:38 -- common/autotest_common.sh@1177 -- # local i=0 00:27:36.172 16:04:38 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:36.172 16:04:38 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:36.172 16:04:38 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:38.071 16:04:40 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:38.071 16:04:40 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:38.071 16:04:40 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:27:38.071 16:04:40 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:38.071 16:04:40 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:38.071 16:04:40 -- common/autotest_common.sh@1187 -- # return 0 00:27:38.071 16:04:40 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:38.071 16:04:40 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 --hostid=3afe7664-1acb-4c6d-8a94-b57f48f48b78 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:27:38.329 16:04:41 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:27:38.329 16:04:41 -- common/autotest_common.sh@1177 -- # local i=0 00:27:38.329 16:04:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:38.329 16:04:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:38.329 16:04:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:40.227 16:04:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:40.227 16:04:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:40.227 16:04:43 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:27:40.227 16:04:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:40.227 16:04:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:40.227 16:04:43 
-- common/autotest_common.sh@1187 -- # return 0 00:27:40.227 16:04:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:40.227 16:04:43 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 --hostid=3afe7664-1acb-4c6d-8a94-b57f48f48b78 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:27:40.484 16:04:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:27:40.484 16:04:43 -- common/autotest_common.sh@1177 -- # local i=0 00:27:40.484 16:04:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:40.484 16:04:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:40.484 16:04:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:42.382 16:04:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:42.382 16:04:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:42.382 16:04:45 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:27:42.382 16:04:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:42.382 16:04:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:42.382 16:04:45 -- common/autotest_common.sh@1187 -- # return 0 00:27:42.382 16:04:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:42.382 16:04:45 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 --hostid=3afe7664-1acb-4c6d-8a94-b57f48f48b78 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:27:42.640 16:04:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:27:42.640 16:04:45 -- common/autotest_common.sh@1177 -- # local i=0 00:27:42.640 16:04:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:42.640 16:04:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:42.640 16:04:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:44.539 16:04:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:44.539 16:04:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:44.539 16:04:47 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:27:44.539 16:04:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:44.539 16:04:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:44.539 16:04:47 -- common/autotest_common.sh@1187 -- # return 0 00:27:44.539 16:04:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:44.539 16:04:47 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 --hostid=3afe7664-1acb-4c6d-8a94-b57f48f48b78 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:27:44.797 16:04:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:27:44.797 16:04:47 -- common/autotest_common.sh@1177 -- # local i=0 00:27:44.797 16:04:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:44.797 16:04:47 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:44.797 16:04:47 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:46.696 16:04:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:46.696 16:04:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:46.696 16:04:49 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:27:46.696 16:04:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 
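The connect phase in progress here (it continues through cnode11 below) repeats the same two steps per subsystem: nvme connect over TCP to 10.0.0.2:4420 with the host NQN/ID generated earlier, then wait until lsblk shows a device whose serial matches the SPDK$i string set at subsystem creation. A simplified sketch of one iteration (the retry bound inside the waitforserial helper is omitted):

  for i in $(seq 1 $NVMF_SUBSYS); do
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420 \
         --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 \
         --hostid=3afe7664-1acb-4c6d-8a94-b57f48f48b78
    # waitforserial SPDK$i: sleep and re-check until the namespace appears
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do sleep 2; done
  done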
00:27:46.696 16:04:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:46.696 16:04:49 -- common/autotest_common.sh@1187 -- # return 0 00:27:46.696 16:04:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:46.696 16:04:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 --hostid=3afe7664-1acb-4c6d-8a94-b57f48f48b78 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:27:46.954 16:04:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:27:46.954 16:04:49 -- common/autotest_common.sh@1177 -- # local i=0 00:27:46.954 16:04:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:46.954 16:04:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:46.954 16:04:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:48.881 16:04:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:48.881 16:04:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:48.881 16:04:51 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:27:48.881 16:04:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:48.881 16:04:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:48.881 16:04:51 -- common/autotest_common.sh@1187 -- # return 0 00:27:48.881 16:04:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:48.881 16:04:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 --hostid=3afe7664-1acb-4c6d-8a94-b57f48f48b78 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:27:49.139 16:04:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:49.139 16:04:51 -- common/autotest_common.sh@1177 -- # local i=0 00:27:49.139 16:04:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:49.139 16:04:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:49.139 16:04:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:51.043 16:04:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:51.043 16:04:53 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:27:51.043 16:04:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:51.043 16:04:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:51.043 16:04:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:51.043 16:04:53 -- common/autotest_common.sh@1187 -- # return 0 00:27:51.043 16:04:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:51.043 16:04:53 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 --hostid=3afe7664-1acb-4c6d-8a94-b57f48f48b78 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:51.301 16:04:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:51.301 16:04:53 -- common/autotest_common.sh@1177 -- # local i=0 00:27:51.301 16:04:53 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:51.301 16:04:53 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:51.301 16:04:53 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:53.202 16:04:55 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:53.202 16:04:55 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:53.202 16:04:55 
-- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:27:53.202 16:04:55 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:53.202 16:04:55 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:53.202 16:04:55 -- common/autotest_common.sh@1187 -- # return 0 00:27:53.202 16:04:55 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:53.202 [global] 00:27:53.202 thread=1 00:27:53.202 invalidate=1 00:27:53.202 rw=read 00:27:53.202 time_based=1 00:27:53.202 runtime=10 00:27:53.202 ioengine=libaio 00:27:53.202 direct=1 00:27:53.202 bs=262144 00:27:53.202 iodepth=64 00:27:53.202 norandommap=1 00:27:53.202 numjobs=1 00:27:53.202 00:27:53.202 [job0] 00:27:53.202 filename=/dev/nvme0n1 00:27:53.202 [job1] 00:27:53.202 filename=/dev/nvme10n1 00:27:53.202 [job2] 00:27:53.202 filename=/dev/nvme1n1 00:27:53.202 [job3] 00:27:53.202 filename=/dev/nvme2n1 00:27:53.202 [job4] 00:27:53.202 filename=/dev/nvme3n1 00:27:53.202 [job5] 00:27:53.202 filename=/dev/nvme4n1 00:27:53.202 [job6] 00:27:53.202 filename=/dev/nvme5n1 00:27:53.202 [job7] 00:27:53.202 filename=/dev/nvme6n1 00:27:53.202 [job8] 00:27:53.202 filename=/dev/nvme7n1 00:27:53.202 [job9] 00:27:53.202 filename=/dev/nvme8n1 00:27:53.202 [job10] 00:27:53.202 filename=/dev/nvme9n1 00:27:53.460 Could not set queue depth (nvme0n1) 00:27:53.460 Could not set queue depth (nvme10n1) 00:27:53.460 Could not set queue depth (nvme1n1) 00:27:53.460 Could not set queue depth (nvme2n1) 00:27:53.460 Could not set queue depth (nvme3n1) 00:27:53.460 Could not set queue depth (nvme4n1) 00:27:53.460 Could not set queue depth (nvme5n1) 00:27:53.460 Could not set queue depth (nvme6n1) 00:27:53.460 Could not set queue depth (nvme7n1) 00:27:53.460 Could not set queue depth (nvme8n1) 00:27:53.460 Could not set queue depth (nvme9n1) 00:27:53.460 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.460 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.460 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.460 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.460 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.460 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.460 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.460 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.460 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.460 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.460 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.460 fio-3.35 00:27:53.460 Starting 11 threads 00:28:05.666 00:28:05.666 job0: (groupid=0, jobs=1): err= 0: pid=66653: Mon Jul 22 16:05:06 2024 00:28:05.666 read: IOPS=1197, BW=299MiB/s (314MB/s)(3024MiB/10096msec) 00:28:05.666 slat (usec): min=16, max=120379, avg=805.54, stdev=2853.49 
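The read workload whose per-job results begin here was launched through scripts/fio-wrapper; comparing the command with the [global] section printed above, the wrapper's flags map directly onto the generated job file, and each of the eleven connected namespaces (/dev/nvme0n1 through /dev/nvme10n1) gets its own [jobN] entry.

  # invocation used by this test, taken from the log
  /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
  #   -i 262144 -> bs=262144      -d 64 -> iodepth=64
  #   -t read   -> rw=read        -r 10 -> runtime=10 (time_based=1, ioengine=libaio, direct=1)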
00:28:05.666 clat (usec): min=1150, max=229958, avg=52506.94, stdev=44793.87 00:28:05.666 lat (usec): min=1196, max=262939, avg=53312.48, stdev=45466.97 00:28:05.666 clat percentiles (msec): 00:28:05.666 | 1.00th=[ 10], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 31], 00:28:05.666 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 34], 00:28:05.666 | 70.00th=[ 36], 80.00th=[ 41], 90.00th=[ 136], 95.00th=[ 146], 00:28:05.666 | 99.00th=[ 192], 99.50th=[ 201], 99.90th=[ 220], 99.95th=[ 226], 00:28:05.666 | 99.99th=[ 230] 00:28:05.666 bw ( KiB/s): min=93184, max=525312, per=16.16%, avg=307966.25, stdev=186194.92, samples=20 00:28:05.666 iops : min= 364, max= 2052, avg=1202.95, stdev=727.31, samples=20 00:28:05.666 lat (msec) : 2=0.02%, 4=0.11%, 10=0.95%, 20=1.03%, 50=79.48% 00:28:05.666 lat (msec) : 100=0.05%, 250=18.36% 00:28:05.666 cpu : usr=0.52%, sys=3.66%, ctx=2794, majf=0, minf=4097 00:28:05.666 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:28:05.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:05.666 issued rwts: total=12094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.666 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:05.666 job1: (groupid=0, jobs=1): err= 0: pid=66654: Mon Jul 22 16:05:06 2024 00:28:05.666 read: IOPS=482, BW=121MiB/s (127MB/s)(1217MiB/10079msec) 00:28:05.666 slat (usec): min=18, max=88259, avg=2049.48, stdev=5998.77 00:28:05.666 clat (msec): min=15, max=261, avg=130.27, stdev=32.02 00:28:05.666 lat (msec): min=16, max=285, avg=132.32, stdev=32.77 00:28:05.666 clat percentiles (msec): 00:28:05.666 | 1.00th=[ 95], 5.00th=[ 101], 10.00th=[ 105], 20.00th=[ 109], 00:28:05.666 | 30.00th=[ 112], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 121], 00:28:05.666 | 70.00th=[ 128], 80.00th=[ 171], 90.00th=[ 188], 95.00th=[ 194], 00:28:05.666 | 99.00th=[ 211], 99.50th=[ 213], 99.90th=[ 259], 99.95th=[ 262], 00:28:05.666 | 99.99th=[ 262] 00:28:05.666 bw ( KiB/s): min=72047, max=149716, per=6.45%, avg=122915.95, stdev=27203.24, samples=20 00:28:05.666 iops : min= 281, max= 584, avg=479.90, stdev=106.22, samples=20 00:28:05.666 lat (msec) : 20=0.10%, 50=0.10%, 100=4.52%, 250=95.13%, 500=0.14% 00:28:05.666 cpu : usr=0.20%, sys=2.14%, ctx=1111, majf=0, minf=4097 00:28:05.666 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:28:05.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:05.666 issued rwts: total=4868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.666 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:05.666 job2: (groupid=0, jobs=1): err= 0: pid=66655: Mon Jul 22 16:05:06 2024 00:28:05.666 read: IOPS=479, BW=120MiB/s (126MB/s)(1208MiB/10073msec) 00:28:05.666 slat (usec): min=18, max=106674, avg=2065.33, stdev=5817.63 00:28:05.666 clat (msec): min=26, max=283, avg=131.08, stdev=32.43 00:28:05.666 lat (msec): min=28, max=292, avg=133.14, stdev=33.13 00:28:05.666 clat percentiles (msec): 00:28:05.666 | 1.00th=[ 94], 5.00th=[ 101], 10.00th=[ 105], 20.00th=[ 109], 00:28:05.666 | 30.00th=[ 112], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 123], 00:28:05.666 | 70.00th=[ 130], 80.00th=[ 171], 90.00th=[ 188], 95.00th=[ 194], 00:28:05.666 | 99.00th=[ 213], 99.50th=[ 222], 99.90th=[ 239], 99.95th=[ 241], 00:28:05.666 | 99.99th=[ 284] 00:28:05.666 bw ( KiB/s): min=73216, 
max=151760, per=6.40%, avg=122001.80, stdev=27514.40, samples=20 00:28:05.667 iops : min= 286, max= 592, avg=476.35, stdev=107.39, samples=20 00:28:05.667 lat (msec) : 50=0.37%, 100=4.24%, 250=95.36%, 500=0.02% 00:28:05.667 cpu : usr=0.23%, sys=1.63%, ctx=1203, majf=0, minf=4097 00:28:05.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:28:05.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:05.667 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.667 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:05.667 job3: (groupid=0, jobs=1): err= 0: pid=66656: Mon Jul 22 16:05:06 2024 00:28:05.667 read: IOPS=1064, BW=266MiB/s (279MB/s)(2664MiB/10013msec) 00:28:05.667 slat (usec): min=18, max=61012, avg=927.64, stdev=2246.45 00:28:05.667 clat (msec): min=7, max=119, avg=59.12, stdev= 9.83 00:28:05.667 lat (msec): min=7, max=119, avg=60.05, stdev= 9.86 00:28:05.667 clat percentiles (msec): 00:28:05.667 | 1.00th=[ 34], 5.00th=[ 50], 10.00th=[ 52], 20.00th=[ 55], 00:28:05.667 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59], 00:28:05.667 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 68], 95.00th=[ 74], 00:28:05.667 | 99.00th=[ 104], 99.50th=[ 112], 99.90th=[ 117], 99.95th=[ 118], 00:28:05.667 | 99.99th=[ 121] 00:28:05.667 bw ( KiB/s): min=175265, max=296448, per=14.27%, avg=272052.16, stdev=28756.15, samples=19 00:28:05.667 iops : min= 684, max= 1158, avg=1062.37, stdev=112.40, samples=19 00:28:05.667 lat (msec) : 10=0.05%, 20=0.17%, 50=5.57%, 100=93.00%, 250=1.22% 00:28:05.667 cpu : usr=0.41%, sys=3.43%, ctx=2354, majf=0, minf=4097 00:28:05.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:28:05.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:05.667 issued rwts: total=10655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.667 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:05.667 job4: (groupid=0, jobs=1): err= 0: pid=66657: Mon Jul 22 16:05:06 2024 00:28:05.667 read: IOPS=448, BW=112MiB/s (118MB/s)(1133MiB/10102msec) 00:28:05.667 slat (usec): min=17, max=122595, avg=2164.70, stdev=6201.62 00:28:05.667 clat (msec): min=41, max=294, avg=140.19, stdev=30.27 00:28:05.667 lat (msec): min=42, max=311, avg=142.36, stdev=31.09 00:28:05.667 clat percentiles (msec): 00:28:05.667 | 1.00th=[ 67], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 117], 00:28:05.667 | 30.00th=[ 123], 40.00th=[ 130], 50.00th=[ 134], 60.00th=[ 138], 00:28:05.667 | 70.00th=[ 144], 80.00th=[ 176], 90.00th=[ 190], 95.00th=[ 197], 00:28:05.667 | 99.00th=[ 211], 99.50th=[ 222], 99.90th=[ 247], 99.95th=[ 249], 00:28:05.667 | 99.99th=[ 296] 00:28:05.667 bw ( KiB/s): min=72849, max=151040, per=6.00%, avg=114315.00, stdev=23472.08, samples=20 00:28:05.667 iops : min= 284, max= 590, avg=446.45, stdev=91.80, samples=20 00:28:05.667 lat (msec) : 50=0.44%, 100=1.66%, 250=97.86%, 500=0.04% 00:28:05.667 cpu : usr=0.16%, sys=1.62%, ctx=1211, majf=0, minf=4097 00:28:05.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:05.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:05.667 issued rwts: total=4531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.667 latency : 
target=0, window=0, percentile=100.00%, depth=64 00:28:05.667 job5: (groupid=0, jobs=1): err= 0: pid=66658: Mon Jul 22 16:05:06 2024 00:28:05.667 read: IOPS=624, BW=156MiB/s (164MB/s)(1577MiB/10092msec) 00:28:05.667 slat (usec): min=18, max=42646, avg=1582.92, stdev=3745.95 00:28:05.667 clat (msec): min=29, max=231, avg=100.75, stdev=32.95 00:28:05.667 lat (msec): min=30, max=231, avg=102.33, stdev=33.46 00:28:05.667 clat percentiles (msec): 00:28:05.667 | 1.00th=[ 51], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 64], 00:28:05.667 | 30.00th=[ 69], 40.00th=[ 86], 50.00th=[ 111], 60.00th=[ 118], 00:28:05.667 | 70.00th=[ 127], 80.00th=[ 134], 90.00th=[ 140], 95.00th=[ 144], 00:28:05.667 | 99.00th=[ 155], 99.50th=[ 167], 99.90th=[ 222], 99.95th=[ 232], 00:28:05.667 | 99.99th=[ 232] 00:28:05.667 bw ( KiB/s): min=110592, max=272896, per=8.38%, avg=159717.25, stdev=55354.77, samples=20 00:28:05.667 iops : min= 432, max= 1066, avg=623.80, stdev=216.14, samples=20 00:28:05.667 lat (msec) : 50=0.78%, 100=42.52%, 250=56.71% 00:28:05.667 cpu : usr=0.36%, sys=2.69%, ctx=1405, majf=0, minf=4097 00:28:05.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:05.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:05.667 issued rwts: total=6306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.667 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:05.667 job6: (groupid=0, jobs=1): err= 0: pid=66659: Mon Jul 22 16:05:06 2024 00:28:05.667 read: IOPS=483, BW=121MiB/s (127MB/s)(1218MiB/10076msec) 00:28:05.667 slat (usec): min=18, max=85511, avg=2050.17, stdev=5802.76 00:28:05.667 clat (msec): min=57, max=256, avg=130.10, stdev=32.40 00:28:05.667 lat (msec): min=58, max=271, avg=132.15, stdev=33.11 00:28:05.667 clat percentiles (msec): 00:28:05.667 | 1.00th=[ 69], 5.00th=[ 100], 10.00th=[ 105], 20.00th=[ 109], 00:28:05.667 | 30.00th=[ 112], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 122], 00:28:05.667 | 70.00th=[ 129], 80.00th=[ 171], 90.00th=[ 188], 95.00th=[ 197], 00:28:05.667 | 99.00th=[ 209], 99.50th=[ 218], 99.90th=[ 255], 99.95th=[ 255], 00:28:05.667 | 99.99th=[ 257] 00:28:05.667 bw ( KiB/s): min=80896, max=149205, per=6.45%, avg=122999.15, stdev=26634.66, samples=20 00:28:05.667 iops : min= 316, max= 582, avg=480.20, stdev=104.03, samples=20 00:28:05.667 lat (msec) : 100=5.71%, 250=94.15%, 500=0.14% 00:28:05.667 cpu : usr=0.25%, sys=1.95%, ctx=1140, majf=0, minf=4097 00:28:05.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:28:05.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:05.667 issued rwts: total=4871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.667 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:05.667 job7: (groupid=0, jobs=1): err= 0: pid=66660: Mon Jul 22 16:05:06 2024 00:28:05.667 read: IOPS=1139, BW=285MiB/s (299MB/s)(2853MiB/10017msec) 00:28:05.667 slat (usec): min=18, max=32854, avg=860.93, stdev=2020.57 00:28:05.667 clat (usec): min=4479, max=99974, avg=55222.91, stdev=10940.24 00:28:05.667 lat (msec): min=4, max=100, avg=56.08, stdev=11.06 00:28:05.667 clat percentiles (msec): 00:28:05.667 | 1.00th=[ 22], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 53], 00:28:05.667 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:28:05.667 | 70.00th=[ 60], 80.00th=[ 62], 90.00th=[ 
66], 95.00th=[ 69], 00:28:05.667 | 99.00th=[ 80], 99.50th=[ 84], 99.90th=[ 92], 99.95th=[ 100], 00:28:05.667 | 99.99th=[ 101] 00:28:05.667 bw ( KiB/s): min=240640, max=464990, per=15.23%, avg=290282.15, stdev=48400.08, samples=20 00:28:05.667 iops : min= 940, max= 1816, avg=1133.65, stdev=189.04, samples=20 00:28:05.667 lat (msec) : 10=0.09%, 20=0.89%, 50=14.82%, 100=84.21% 00:28:05.667 cpu : usr=0.61%, sys=3.92%, ctx=2484, majf=0, minf=4097 00:28:05.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:28:05.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:05.667 issued rwts: total=11412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.667 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:05.667 job8: (groupid=0, jobs=1): err= 0: pid=66661: Mon Jul 22 16:05:06 2024 00:28:05.667 read: IOPS=446, BW=112MiB/s (117MB/s)(1127MiB/10102msec) 00:28:05.667 slat (usec): min=18, max=108897, avg=2186.28, stdev=5738.63 00:28:05.667 clat (msec): min=40, max=270, avg=140.85, stdev=29.84 00:28:05.667 lat (msec): min=41, max=286, avg=143.03, stdev=30.50 00:28:05.667 clat percentiles (msec): 00:28:05.667 | 1.00th=[ 65], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 118], 00:28:05.667 | 30.00th=[ 124], 40.00th=[ 130], 50.00th=[ 134], 60.00th=[ 138], 00:28:05.667 | 70.00th=[ 144], 80.00th=[ 178], 90.00th=[ 190], 95.00th=[ 197], 00:28:05.667 | 99.00th=[ 215], 99.50th=[ 220], 99.90th=[ 234], 99.95th=[ 243], 00:28:05.667 | 99.99th=[ 271] 00:28:05.667 bw ( KiB/s): min=78336, max=146432, per=5.97%, avg=113735.10, stdev=22582.69, samples=20 00:28:05.667 iops : min= 306, max= 572, avg=444.20, stdev=88.30, samples=20 00:28:05.667 lat (msec) : 50=0.20%, 100=1.71%, 250=98.07%, 500=0.02% 00:28:05.667 cpu : usr=0.21%, sys=1.48%, ctx=1211, majf=0, minf=4097 00:28:05.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:05.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:05.667 issued rwts: total=4509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.667 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:05.667 job9: (groupid=0, jobs=1): err= 0: pid=66662: Mon Jul 22 16:05:06 2024 00:28:05.667 read: IOPS=626, BW=157MiB/s (164MB/s)(1583MiB/10105msec) 00:28:05.667 slat (usec): min=18, max=41518, avg=1579.10, stdev=3708.95 00:28:05.667 clat (msec): min=22, max=237, avg=100.40, stdev=33.14 00:28:05.667 lat (msec): min=22, max=251, avg=101.98, stdev=33.66 00:28:05.667 clat percentiles (msec): 00:28:05.667 | 1.00th=[ 50], 5.00th=[ 56], 10.00th=[ 59], 20.00th=[ 63], 00:28:05.667 | 30.00th=[ 68], 40.00th=[ 86], 50.00th=[ 111], 60.00th=[ 120], 00:28:05.667 | 70.00th=[ 127], 80.00th=[ 134], 90.00th=[ 138], 95.00th=[ 144], 00:28:05.667 | 99.00th=[ 153], 99.50th=[ 171], 99.90th=[ 220], 99.95th=[ 220], 00:28:05.667 | 99.99th=[ 239] 00:28:05.667 bw ( KiB/s): min=113664, max=262131, per=8.41%, avg=160361.15, stdev=55410.33, samples=20 00:28:05.667 iops : min= 444, max= 1023, avg=626.30, stdev=216.26, samples=20 00:28:05.667 lat (msec) : 50=1.50%, 100=42.44%, 250=56.06% 00:28:05.667 cpu : usr=0.25%, sys=2.67%, ctx=1415, majf=0, minf=4097 00:28:05.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:05.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.667 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:05.667 issued rwts: total=6333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.667 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:05.668 job10: (groupid=0, jobs=1): err= 0: pid=66663: Mon Jul 22 16:05:06 2024 00:28:05.668 read: IOPS=479, BW=120MiB/s (126MB/s)(1209MiB/10077msec) 00:28:05.668 slat (usec): min=18, max=80378, avg=2068.40, stdev=5422.27 00:28:05.668 clat (msec): min=26, max=261, avg=131.13, stdev=31.96 00:28:05.668 lat (msec): min=27, max=261, avg=133.20, stdev=32.57 00:28:05.668 clat percentiles (msec): 00:28:05.668 | 1.00th=[ 94], 5.00th=[ 102], 10.00th=[ 106], 20.00th=[ 110], 00:28:05.668 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 118], 60.00th=[ 123], 00:28:05.668 | 70.00th=[ 129], 80.00th=[ 174], 90.00th=[ 188], 95.00th=[ 197], 00:28:05.668 | 99.00th=[ 220], 99.50th=[ 220], 99.90th=[ 245], 99.95th=[ 245], 00:28:05.668 | 99.99th=[ 262] 00:28:05.668 bw ( KiB/s): min=75414, max=145628, per=6.40%, avg=122035.95, stdev=26663.78, samples=20 00:28:05.668 iops : min= 294, max= 568, avg=476.45, stdev=104.13, samples=20 00:28:05.668 lat (msec) : 50=0.10%, 100=4.22%, 250=95.66%, 500=0.02% 00:28:05.668 cpu : usr=0.19%, sys=1.87%, ctx=1130, majf=0, minf=4097 00:28:05.668 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:28:05.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:05.668 issued rwts: total=4834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.668 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:05.668 00:28:05.668 Run status group 0 (all jobs): 00:28:05.668 READ: bw=1862MiB/s (1952MB/s), 112MiB/s-299MiB/s (117MB/s-314MB/s), io=18.4GiB (19.7GB), run=10013-10105msec 00:28:05.668 00:28:05.668 Disk stats (read/write): 00:28:05.668 nvme0n1: ios=24048/0, merge=0/0, ticks=1227463/0, in_queue=1227463, util=97.34% 00:28:05.668 nvme10n1: ios=9572/0, merge=0/0, ticks=1227815/0, in_queue=1227815, util=97.67% 00:28:05.668 nvme1n1: ios=9514/0, merge=0/0, ticks=1226126/0, in_queue=1226126, util=97.79% 00:28:05.668 nvme2n1: ios=21149/0, merge=0/0, ticks=1232134/0, in_queue=1232134, util=98.00% 00:28:05.668 nvme3n1: ios=8911/0, merge=0/0, ticks=1224079/0, in_queue=1224079, util=97.92% 00:28:05.668 nvme4n1: ios=12457/0, merge=0/0, ticks=1223471/0, in_queue=1223471, util=98.24% 00:28:05.668 nvme5n1: ios=9566/0, merge=0/0, ticks=1225888/0, in_queue=1225888, util=98.39% 00:28:05.668 nvme6n1: ios=22628/0, merge=0/0, ticks=1233216/0, in_queue=1233216, util=98.55% 00:28:05.668 nvme7n1: ios=8873/0, merge=0/0, ticks=1224291/0, in_queue=1224291, util=98.73% 00:28:05.668 nvme8n1: ios=12512/0, merge=0/0, ticks=1225507/0, in_queue=1225507, util=99.06% 00:28:05.668 nvme9n1: ios=9511/0, merge=0/0, ticks=1225825/0, in_queue=1225825, util=99.04% 00:28:05.668 16:05:06 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:28:05.668 [global] 00:28:05.668 thread=1 00:28:05.668 invalidate=1 00:28:05.668 rw=randwrite 00:28:05.668 time_based=1 00:28:05.668 runtime=10 00:28:05.668 ioengine=libaio 00:28:05.668 direct=1 00:28:05.668 bs=262144 00:28:05.668 iodepth=64 00:28:05.668 norandommap=1 00:28:05.668 numjobs=1 00:28:05.668 00:28:05.668 [job0] 00:28:05.668 filename=/dev/nvme0n1 00:28:05.668 [job1] 00:28:05.668 filename=/dev/nvme10n1 00:28:05.668 [job2] 00:28:05.668 filename=/dev/nvme1n1 00:28:05.668 
[job3] 00:28:05.668 filename=/dev/nvme2n1 00:28:05.668 [job4] 00:28:05.668 filename=/dev/nvme3n1 00:28:05.668 [job5] 00:28:05.668 filename=/dev/nvme4n1 00:28:05.668 [job6] 00:28:05.668 filename=/dev/nvme5n1 00:28:05.668 [job7] 00:28:05.668 filename=/dev/nvme6n1 00:28:05.668 [job8] 00:28:05.668 filename=/dev/nvme7n1 00:28:05.668 [job9] 00:28:05.668 filename=/dev/nvme8n1 00:28:05.668 [job10] 00:28:05.668 filename=/dev/nvme9n1 00:28:05.668 Could not set queue depth (nvme0n1) 00:28:05.668 Could not set queue depth (nvme10n1) 00:28:05.668 Could not set queue depth (nvme1n1) 00:28:05.668 Could not set queue depth (nvme2n1) 00:28:05.668 Could not set queue depth (nvme3n1) 00:28:05.668 Could not set queue depth (nvme4n1) 00:28:05.668 Could not set queue depth (nvme5n1) 00:28:05.668 Could not set queue depth (nvme6n1) 00:28:05.668 Could not set queue depth (nvme7n1) 00:28:05.668 Could not set queue depth (nvme8n1) 00:28:05.668 Could not set queue depth (nvme9n1) 00:28:05.668 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:05.668 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:05.668 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:05.668 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:05.668 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:05.668 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:05.668 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:05.668 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:05.668 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:05.668 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:05.668 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:05.668 fio-3.35 00:28:05.668 Starting 11 threads 00:28:15.666 00:28:15.666 job0: (groupid=0, jobs=1): err= 0: pid=66857: Mon Jul 22 16:05:17 2024 00:28:15.666 write: IOPS=419, BW=105MiB/s (110MB/s)(1067MiB/10161msec); 0 zone resets 00:28:15.666 slat (usec): min=16, max=26231, avg=2339.61, stdev=4052.00 00:28:15.666 clat (msec): min=13, max=346, avg=149.92, stdev=21.65 00:28:15.666 lat (msec): min=13, max=346, avg=152.26, stdev=21.54 00:28:15.666 clat percentiles (msec): 00:28:15.666 | 1.00th=[ 68], 5.00th=[ 138], 10.00th=[ 140], 20.00th=[ 142], 00:28:15.666 | 30.00th=[ 148], 40.00th=[ 148], 50.00th=[ 148], 60.00th=[ 148], 00:28:15.666 | 70.00th=[ 150], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 182], 00:28:15.666 | 99.00th=[ 209], 99.50th=[ 292], 99.90th=[ 334], 99.95th=[ 334], 00:28:15.666 | 99.99th=[ 347] 00:28:15.666 bw ( KiB/s): min=86528, max=114176, per=9.12%, avg=107622.40, stdev=6606.27, samples=20 00:28:15.666 iops : min= 338, max= 446, avg=420.40, stdev=25.81, samples=20 00:28:15.666 lat (msec) : 20=0.16%, 50=0.56%, 100=0.68%, 250=97.80%, 500=0.80% 00:28:15.666 cpu : usr=0.76%, sys=1.04%, ctx=5317, majf=0, minf=1 00:28:15.666 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:28:15.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:15.666 issued rwts: total=0,4267,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.666 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:15.666 job1: (groupid=0, jobs=1): err= 0: pid=66858: Mon Jul 22 16:05:17 2024 00:28:15.666 write: IOPS=308, BW=77.1MiB/s (80.8MB/s)(784MiB/10163msec); 0 zone resets 00:28:15.666 slat (usec): min=19, max=19285, avg=3126.52, stdev=5506.77 00:28:15.666 clat (msec): min=14, max=335, avg=204.29, stdev=24.15 00:28:15.666 lat (msec): min=14, max=335, avg=207.42, stdev=23.99 00:28:15.666 clat percentiles (msec): 00:28:15.666 | 1.00th=[ 70], 5.00th=[ 180], 10.00th=[ 188], 20.00th=[ 199], 00:28:15.666 | 30.00th=[ 203], 40.00th=[ 205], 50.00th=[ 211], 60.00th=[ 213], 00:28:15.666 | 70.00th=[ 213], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 220], 00:28:15.666 | 99.00th=[ 245], 99.50th=[ 292], 99.90th=[ 326], 99.95th=[ 334], 00:28:15.666 | 99.99th=[ 334] 00:28:15.666 bw ( KiB/s): min=73728, max=89088, per=6.66%, avg=78609.60, stdev=4088.43, samples=20 00:28:15.666 iops : min= 288, max= 348, avg=307.05, stdev=15.97, samples=20 00:28:15.667 lat (msec) : 20=0.10%, 50=0.48%, 100=0.99%, 250=97.48%, 500=0.96% 00:28:15.667 cpu : usr=0.52%, sys=0.93%, ctx=4088, majf=0, minf=1 00:28:15.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:28:15.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:15.667 issued rwts: total=0,3134,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.667 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:15.667 job2: (groupid=0, jobs=1): err= 0: pid=66870: Mon Jul 22 16:05:17 2024 00:28:15.667 write: IOPS=305, BW=76.3MiB/s (80.0MB/s)(775MiB/10159msec); 0 zone resets 00:28:15.667 slat (usec): min=19, max=60844, avg=3207.56, stdev=5721.87 00:28:15.667 clat (msec): min=11, max=333, avg=206.44, stdev=29.50 00:28:15.667 lat (msec): min=11, max=333, avg=209.65, stdev=29.45 00:28:15.667 clat percentiles (msec): 00:28:15.667 | 1.00th=[ 32], 5.00th=[ 180], 10.00th=[ 194], 20.00th=[ 201], 00:28:15.667 | 30.00th=[ 207], 40.00th=[ 211], 50.00th=[ 213], 60.00th=[ 213], 00:28:15.667 | 70.00th=[ 215], 80.00th=[ 218], 90.00th=[ 224], 95.00th=[ 226], 00:28:15.667 | 99.00th=[ 251], 99.50th=[ 288], 99.90th=[ 321], 99.95th=[ 334], 00:28:15.667 | 99.99th=[ 334] 00:28:15.667 bw ( KiB/s): min=71680, max=91648, per=6.59%, avg=77721.60, stdev=4620.26, samples=20 00:28:15.667 iops : min= 280, max= 358, avg=303.60, stdev=18.05, samples=20 00:28:15.667 lat (msec) : 20=0.48%, 50=1.29%, 100=0.26%, 250=96.97%, 500=1.00% 00:28:15.667 cpu : usr=0.47%, sys=1.09%, ctx=3897, majf=0, minf=1 00:28:15.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:28:15.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:15.667 issued rwts: total=0,3099,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.667 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:15.667 job3: (groupid=0, jobs=1): err= 0: pid=66871: Mon Jul 22 16:05:17 2024 00:28:15.667 write: IOPS=303, BW=75.8MiB/s (79.4MB/s)(770MiB/10163msec); 0 zone resets 00:28:15.667 slat (usec): min=17, max=60169, 
avg=3242.82, stdev=5719.69 00:28:15.667 clat (msec): min=22, max=337, avg=207.84, stdev=22.13 00:28:15.667 lat (msec): min=22, max=337, avg=211.08, stdev=21.75 00:28:15.667 clat percentiles (msec): 00:28:15.667 | 1.00th=[ 99], 5.00th=[ 182], 10.00th=[ 197], 20.00th=[ 201], 00:28:15.667 | 30.00th=[ 205], 40.00th=[ 211], 50.00th=[ 211], 60.00th=[ 213], 00:28:15.667 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 220], 95.00th=[ 226], 00:28:15.667 | 99.00th=[ 253], 99.50th=[ 292], 99.90th=[ 326], 99.95th=[ 338], 00:28:15.667 | 99.99th=[ 338] 00:28:15.667 bw ( KiB/s): min=67449, max=88576, per=6.55%, avg=77228.45, stdev=3659.35, samples=20 00:28:15.667 iops : min= 263, max= 346, avg=301.65, stdev=14.36, samples=20 00:28:15.667 lat (msec) : 50=0.39%, 100=0.65%, 250=97.40%, 500=1.56% 00:28:15.667 cpu : usr=0.44%, sys=0.91%, ctx=3653, majf=0, minf=1 00:28:15.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:28:15.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:15.667 issued rwts: total=0,3080,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.667 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:15.667 job4: (groupid=0, jobs=1): err= 0: pid=66872: Mon Jul 22 16:05:17 2024 00:28:15.667 write: IOPS=418, BW=105MiB/s (110MB/s)(1063MiB/10154msec); 0 zone resets 00:28:15.667 slat (usec): min=17, max=53831, avg=2347.46, stdev=4112.13 00:28:15.667 clat (msec): min=55, max=326, avg=150.39, stdev=16.75 00:28:15.667 lat (msec): min=55, max=327, avg=152.74, stdev=16.43 00:28:15.667 clat percentiles (msec): 00:28:15.667 | 1.00th=[ 132], 5.00th=[ 138], 10.00th=[ 140], 20.00th=[ 144], 00:28:15.667 | 30.00th=[ 148], 40.00th=[ 148], 50.00th=[ 148], 60.00th=[ 148], 00:28:15.667 | 70.00th=[ 150], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 176], 00:28:15.667 | 99.00th=[ 209], 99.50th=[ 275], 99.90th=[ 317], 99.95th=[ 317], 00:28:15.667 | 99.99th=[ 326] 00:28:15.667 bw ( KiB/s): min=88576, max=112640, per=9.09%, avg=107252.95, stdev=6190.49, samples=20 00:28:15.667 iops : min= 346, max= 440, avg=418.95, stdev=24.18, samples=20 00:28:15.667 lat (msec) : 100=0.49%, 250=98.80%, 500=0.71% 00:28:15.667 cpu : usr=0.82%, sys=1.23%, ctx=4560, majf=0, minf=1 00:28:15.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:28:15.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:15.667 issued rwts: total=0,4253,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.667 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:15.667 job5: (groupid=0, jobs=1): err= 0: pid=66873: Mon Jul 22 16:05:17 2024 00:28:15.667 write: IOPS=420, BW=105MiB/s (110MB/s)(1067MiB/10157msec); 0 zone resets 00:28:15.667 slat (usec): min=17, max=52856, avg=2338.89, stdev=4094.51 00:28:15.667 clat (msec): min=16, max=333, avg=149.96, stdev=19.96 00:28:15.667 lat (msec): min=16, max=333, avg=152.30, stdev=19.80 00:28:15.667 clat percentiles (msec): 00:28:15.667 | 1.00th=[ 71], 5.00th=[ 138], 10.00th=[ 140], 20.00th=[ 144], 00:28:15.667 | 30.00th=[ 148], 40.00th=[ 148], 50.00th=[ 148], 60.00th=[ 148], 00:28:15.667 | 70.00th=[ 150], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 178], 00:28:15.667 | 99.00th=[ 211], 99.50th=[ 275], 99.90th=[ 321], 99.95th=[ 321], 00:28:15.667 | 99.99th=[ 334] 00:28:15.667 bw ( KiB/s): min=90624, max=112640, per=9.12%, avg=107596.80, 
stdev=6024.44, samples=20 00:28:15.667 iops : min= 354, max= 440, avg=420.30, stdev=23.53, samples=20 00:28:15.667 lat (msec) : 20=0.09%, 50=0.56%, 100=0.75%, 250=97.89%, 500=0.70% 00:28:15.667 cpu : usr=0.70%, sys=1.22%, ctx=4743, majf=0, minf=1 00:28:15.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:28:15.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:15.667 issued rwts: total=0,4266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.667 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:15.667 job6: (groupid=0, jobs=1): err= 0: pid=66874: Mon Jul 22 16:05:17 2024 00:28:15.667 write: IOPS=1113, BW=278MiB/s (292MB/s)(2802MiB/10063msec); 0 zone resets 00:28:15.667 slat (usec): min=15, max=20692, avg=872.02, stdev=1480.48 00:28:15.667 clat (msec): min=4, max=201, avg=56.58, stdev= 7.09 00:28:15.667 lat (msec): min=5, max=201, avg=57.45, stdev= 7.01 00:28:15.667 clat percentiles (msec): 00:28:15.667 | 1.00th=[ 40], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 54], 00:28:15.667 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 57], 00:28:15.667 | 70.00th=[ 57], 80.00th=[ 58], 90.00th=[ 61], 95.00th=[ 63], 00:28:15.667 | 99.00th=[ 70], 99.50th=[ 95], 99.90th=[ 163], 99.95th=[ 184], 00:28:15.667 | 99.99th=[ 199] 00:28:15.667 bw ( KiB/s): min=251392, max=294989, per=24.20%, avg=285458.50, stdev=10247.29, samples=20 00:28:15.667 iops : min= 982, max= 1152, avg=1114.95, stdev=39.98, samples=20 00:28:15.667 lat (msec) : 10=0.04%, 50=1.22%, 100=98.30%, 250=0.44% 00:28:15.667 cpu : usr=1.78%, sys=2.71%, ctx=14178, majf=0, minf=1 00:28:15.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:28:15.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:15.667 issued rwts: total=0,11207,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.667 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:15.667 job7: (groupid=0, jobs=1): err= 0: pid=66875: Mon Jul 22 16:05:17 2024 00:28:15.667 write: IOPS=299, BW=74.9MiB/s (78.5MB/s)(761MiB/10168msec); 0 zone resets 00:28:15.667 slat (usec): min=18, max=43148, avg=3253.36, stdev=5877.72 00:28:15.667 clat (msec): min=6, max=337, avg=210.32, stdev=28.96 00:28:15.667 lat (msec): min=6, max=337, avg=213.57, stdev=28.92 00:28:15.667 clat percentiles (msec): 00:28:15.667 | 1.00th=[ 73], 5.00th=[ 174], 10.00th=[ 190], 20.00th=[ 203], 00:28:15.667 | 30.00th=[ 211], 40.00th=[ 213], 50.00th=[ 215], 60.00th=[ 218], 00:28:15.667 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 228], 95.00th=[ 232], 00:28:15.667 | 99.00th=[ 249], 99.50th=[ 292], 99.90th=[ 326], 99.95th=[ 338], 00:28:15.667 | 99.99th=[ 338] 00:28:15.667 bw ( KiB/s): min=71680, max=92160, per=6.47%, avg=76313.60, stdev=5386.51, samples=20 00:28:15.667 iops : min= 280, max= 360, avg=298.20, stdev=21.04, samples=20 00:28:15.667 lat (msec) : 10=0.16%, 20=0.13%, 50=0.30%, 100=1.58%, 250=96.85% 00:28:15.667 lat (msec) : 500=0.99% 00:28:15.667 cpu : usr=0.46%, sys=1.01%, ctx=3494, majf=0, minf=1 00:28:15.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:28:15.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:15.667 issued rwts: total=0,3045,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:28:15.667 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:15.667 job8: (groupid=0, jobs=1): err= 0: pid=66876: Mon Jul 22 16:05:17 2024 00:28:15.667 write: IOPS=424, BW=106MiB/s (111MB/s)(1077MiB/10157msec); 0 zone resets 00:28:15.667 slat (usec): min=16, max=18876, avg=2296.72, stdev=3990.88 00:28:15.667 clat (msec): min=11, max=331, avg=148.49, stdev=20.93 00:28:15.667 lat (msec): min=11, max=331, avg=150.79, stdev=20.87 00:28:15.667 clat percentiles (msec): 00:28:15.667 | 1.00th=[ 52], 5.00th=[ 138], 10.00th=[ 138], 20.00th=[ 140], 00:28:15.667 | 30.00th=[ 146], 40.00th=[ 148], 50.00th=[ 148], 60.00th=[ 148], 00:28:15.667 | 70.00th=[ 150], 80.00th=[ 153], 90.00th=[ 161], 95.00th=[ 176], 00:28:15.667 | 99.00th=[ 207], 99.50th=[ 271], 99.90th=[ 317], 99.95th=[ 317], 00:28:15.667 | 99.99th=[ 330] 00:28:15.667 bw ( KiB/s): min=90624, max=124416, per=9.21%, avg=108687.55, stdev=6517.91, samples=20 00:28:15.667 iops : min= 354, max= 486, avg=424.55, stdev=25.48, samples=20 00:28:15.667 lat (msec) : 20=0.23%, 50=0.74%, 100=0.65%, 250=97.77%, 500=0.60% 00:28:15.667 cpu : usr=0.87%, sys=1.16%, ctx=4340, majf=0, minf=1 00:28:15.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:28:15.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:15.667 issued rwts: total=0,4309,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.667 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:15.668 job9: (groupid=0, jobs=1): err= 0: pid=66877: Mon Jul 22 16:05:17 2024 00:28:15.668 write: IOPS=306, BW=76.6MiB/s (80.3MB/s)(778MiB/10154msec); 0 zone resets 00:28:15.668 slat (usec): min=17, max=83462, avg=3117.53, stdev=5798.87 00:28:15.668 clat (msec): min=68, max=324, avg=205.56, stdev=24.69 00:28:15.668 lat (msec): min=68, max=324, avg=208.68, stdev=24.64 00:28:15.668 clat percentiles (msec): 00:28:15.668 | 1.00th=[ 86], 5.00th=[ 163], 10.00th=[ 176], 20.00th=[ 201], 00:28:15.668 | 30.00th=[ 205], 40.00th=[ 209], 50.00th=[ 211], 60.00th=[ 213], 00:28:15.668 | 70.00th=[ 215], 80.00th=[ 218], 90.00th=[ 220], 95.00th=[ 228], 00:28:15.668 | 99.00th=[ 259], 99.50th=[ 284], 99.90th=[ 313], 99.95th=[ 326], 00:28:15.668 | 99.99th=[ 326] 00:28:15.668 bw ( KiB/s): min=63615, max=103424, per=6.62%, avg=78060.75, stdev=7795.80, samples=20 00:28:15.668 iops : min= 248, max= 404, avg=304.90, stdev=30.50, samples=20 00:28:15.668 lat (msec) : 100=1.51%, 250=97.43%, 500=1.06% 00:28:15.668 cpu : usr=0.57%, sys=0.90%, ctx=1799, majf=0, minf=1 00:28:15.668 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:28:15.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:15.668 issued rwts: total=0,3112,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.668 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:15.668 job10: (groupid=0, jobs=1): err= 0: pid=66878: Mon Jul 22 16:05:17 2024 00:28:15.668 write: IOPS=303, BW=75.8MiB/s (79.5MB/s)(771MiB/10162msec); 0 zone resets 00:28:15.668 slat (usec): min=19, max=41722, avg=3241.34, stdev=5615.56 00:28:15.668 clat (msec): min=19, max=333, avg=207.68, stdev=20.83 00:28:15.668 lat (msec): min=19, max=333, avg=210.92, stdev=20.44 00:28:15.668 clat percentiles (msec): 00:28:15.668 | 1.00th=[ 108], 5.00th=[ 182], 10.00th=[ 197], 20.00th=[ 201], 00:28:15.668 | 30.00th=[ 205], 40.00th=[ 211], 
50.00th=[ 213], 60.00th=[ 213], 00:28:15.668 | 70.00th=[ 215], 80.00th=[ 215], 90.00th=[ 218], 95.00th=[ 222], 00:28:15.668 | 99.00th=[ 243], 99.50th=[ 288], 99.90th=[ 321], 99.95th=[ 334], 00:28:15.668 | 99.99th=[ 334] 00:28:15.668 bw ( KiB/s): min=73728, max=88576, per=6.55%, avg=77285.80, stdev=3023.41, samples=20 00:28:15.668 iops : min= 288, max= 346, avg=301.85, stdev=11.82, samples=20 00:28:15.668 lat (msec) : 20=0.03%, 50=0.39%, 100=0.52%, 250=98.09%, 500=0.97% 00:28:15.668 cpu : usr=0.46%, sys=1.05%, ctx=5464, majf=0, minf=1 00:28:15.668 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:28:15.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:15.668 issued rwts: total=0,3082,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.668 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:15.668 00:28:15.668 Run status group 0 (all jobs): 00:28:15.668 WRITE: bw=1152MiB/s (1208MB/s), 74.9MiB/s-278MiB/s (78.5MB/s-292MB/s), io=11.4GiB (12.3GB), run=10063-10168msec 00:28:15.668 00:28:15.668 Disk stats (read/write): 00:28:15.668 nvme0n1: ios=49/8401, merge=0/0, ticks=39/1209346, in_queue=1209385, util=97.79% 00:28:15.668 nvme10n1: ios=49/6133, merge=0/0, ticks=46/1212227, in_queue=1212273, util=97.93% 00:28:15.668 nvme1n1: ios=46/6063, merge=0/0, ticks=64/1210149, in_queue=1210213, util=98.06% 00:28:15.668 nvme2n1: ios=28/6033, merge=0/0, ticks=72/1212059, in_queue=1212131, util=98.17% 00:28:15.668 nvme3n1: ios=22/8370, merge=0/0, ticks=33/1210542, in_queue=1210575, util=98.01% 00:28:15.668 nvme4n1: ios=0/8402, merge=0/0, ticks=0/1211756, in_queue=1211756, util=98.29% 00:28:15.668 nvme5n1: ios=0/22297, merge=0/0, ticks=0/1220668, in_queue=1220668, util=98.48% 00:28:15.668 nvme6n1: ios=0/5960, merge=0/0, ticks=0/1212055, in_queue=1212055, util=98.53% 00:28:15.668 nvme7n1: ios=0/8485, merge=0/0, ticks=0/1211869, in_queue=1211869, util=98.66% 00:28:15.668 nvme8n1: ios=0/6090, merge=0/0, ticks=0/1213160, in_queue=1213160, util=98.68% 00:28:15.668 nvme9n1: ios=0/6028, merge=0/0, ticks=0/1210543, in_queue=1210543, util=98.80% 00:28:15.668 16:05:17 -- target/multiconnection.sh@36 -- # sync 00:28:15.668 16:05:17 -- target/multiconnection.sh@37 -- # seq 1 11 00:28:15.668 16:05:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:15.668 16:05:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:15.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:15.668 16:05:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:28:15.668 16:05:17 -- common/autotest_common.sh@1198 -- # local i=0 00:28:15.668 16:05:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:28:15.668 16:05:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:15.668 16:05:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:15.668 16:05:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:28:15.668 16:05:17 -- common/autotest_common.sh@1210 -- # return 0 00:28:15.668 16:05:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:15.668 16:05:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.668 16:05:17 -- common/autotest_common.sh@10 -- # set +x 00:28:15.668 16:05:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.668 16:05:17 -- target/multiconnection.sh@37 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:28:15.668 16:05:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:28:15.668 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:28:15.668 16:05:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:28:15.668 16:05:17 -- common/autotest_common.sh@1198 -- # local i=0 00:28:15.668 16:05:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:15.668 16:05:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:28:15.668 16:05:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:15.668 16:05:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:28:15.668 16:05:17 -- common/autotest_common.sh@1210 -- # return 0 00:28:15.668 16:05:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:15.668 16:05:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.668 16:05:17 -- common/autotest_common.sh@10 -- # set +x 00:28:15.668 16:05:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.668 16:05:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:15.668 16:05:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:28:15.668 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:28:15.668 16:05:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:28:15.668 16:05:17 -- common/autotest_common.sh@1198 -- # local i=0 00:28:15.668 16:05:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:28:15.668 16:05:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:15.668 16:05:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:15.668 16:05:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:28:15.668 16:05:17 -- common/autotest_common.sh@1210 -- # return 0 00:28:15.668 16:05:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:15.668 16:05:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.668 16:05:17 -- common/autotest_common.sh@10 -- # set +x 00:28:15.668 16:05:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.668 16:05:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:15.668 16:05:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:28:15.668 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:28:15.668 16:05:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:28:15.668 16:05:17 -- common/autotest_common.sh@1198 -- # local i=0 00:28:15.668 16:05:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:15.668 16:05:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:28:15.668 16:05:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:15.668 16:05:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:28:15.668 16:05:17 -- common/autotest_common.sh@1210 -- # return 0 00:28:15.668 16:05:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:28:15.668 16:05:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.668 16:05:17 -- common/autotest_common.sh@10 -- # set +x 00:28:15.668 16:05:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.668 16:05:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:15.668 16:05:17 -- target/multiconnection.sh@38 -- # nvme disconnect 
-n nqn.2016-06.io.spdk:cnode5 00:28:15.668 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:28:15.668 16:05:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:28:15.668 16:05:17 -- common/autotest_common.sh@1198 -- # local i=0 00:28:15.668 16:05:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:28:15.668 16:05:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:15.668 16:05:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:28:15.668 16:05:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:15.668 16:05:17 -- common/autotest_common.sh@1210 -- # return 0 00:28:15.668 16:05:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:28:15.668 16:05:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.668 16:05:17 -- common/autotest_common.sh@10 -- # set +x 00:28:15.668 16:05:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.668 16:05:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:15.668 16:05:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:28:15.668 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:28:15.668 16:05:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:28:15.668 16:05:18 -- common/autotest_common.sh@1198 -- # local i=0 00:28:15.668 16:05:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:28:15.668 16:05:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:15.668 16:05:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:15.668 16:05:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:28:15.668 16:05:18 -- common/autotest_common.sh@1210 -- # return 0 00:28:15.668 16:05:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:28:15.668 16:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.668 16:05:18 -- common/autotest_common.sh@10 -- # set +x 00:28:15.668 16:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.668 16:05:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:15.668 16:05:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:28:15.668 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:28:15.669 16:05:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:28:15.669 16:05:18 -- common/autotest_common.sh@1198 -- # local i=0 00:28:15.669 16:05:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:28:15.669 16:05:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:15.669 16:05:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:15.669 16:05:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:28:15.669 16:05:18 -- common/autotest_common.sh@1210 -- # return 0 00:28:15.669 16:05:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:28:15.669 16:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.669 16:05:18 -- common/autotest_common.sh@10 -- # set +x 00:28:15.669 16:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.669 16:05:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:15.669 16:05:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:28:15.669 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 
controller(s) 00:28:15.669 16:05:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:28:15.669 16:05:18 -- common/autotest_common.sh@1198 -- # local i=0 00:28:15.669 16:05:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:28:15.669 16:05:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:15.669 16:05:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:15.669 16:05:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:28:15.669 16:05:18 -- common/autotest_common.sh@1210 -- # return 0 00:28:15.669 16:05:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:28:15.669 16:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.669 16:05:18 -- common/autotest_common.sh@10 -- # set +x 00:28:15.669 16:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.669 16:05:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:15.669 16:05:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:28:15.669 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:28:15.669 16:05:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:28:15.669 16:05:18 -- common/autotest_common.sh@1198 -- # local i=0 00:28:15.669 16:05:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:15.669 16:05:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:28:15.669 16:05:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:15.669 16:05:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:28:15.669 16:05:18 -- common/autotest_common.sh@1210 -- # return 0 00:28:15.669 16:05:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:28:15.669 16:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.669 16:05:18 -- common/autotest_common.sh@10 -- # set +x 00:28:15.669 16:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.669 16:05:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:15.669 16:05:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:28:15.669 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:28:15.669 16:05:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:28:15.669 16:05:18 -- common/autotest_common.sh@1198 -- # local i=0 00:28:15.669 16:05:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:15.669 16:05:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:28:15.669 16:05:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:28:15.669 16:05:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:15.669 16:05:18 -- common/autotest_common.sh@1210 -- # return 0 00:28:15.669 16:05:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:28:15.669 16:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.669 16:05:18 -- common/autotest_common.sh@10 -- # set +x 00:28:15.669 16:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.669 16:05:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:15.669 16:05:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:28:15.669 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:28:15.669 16:05:18 -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK11 00:28:15.669 16:05:18 -- common/autotest_common.sh@1198 -- # local i=0 00:28:15.669 16:05:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:15.669 16:05:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:28:15.669 16:05:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:15.669 16:05:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:28:15.669 16:05:18 -- common/autotest_common.sh@1210 -- # return 0 00:28:15.669 16:05:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:28:15.669 16:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.669 16:05:18 -- common/autotest_common.sh@10 -- # set +x 00:28:15.669 16:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.669 16:05:18 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:28:15.669 16:05:18 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:28:15.669 16:05:18 -- target/multiconnection.sh@47 -- # nvmftestfini 00:28:15.669 16:05:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:15.669 16:05:18 -- nvmf/common.sh@116 -- # sync 00:28:15.669 16:05:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:15.669 16:05:18 -- nvmf/common.sh@119 -- # set +e 00:28:15.669 16:05:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:15.669 16:05:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:15.669 rmmod nvme_tcp 00:28:15.669 rmmod nvme_fabrics 00:28:15.669 rmmod nvme_keyring 00:28:15.927 16:05:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:15.927 16:05:18 -- nvmf/common.sh@123 -- # set -e 00:28:15.927 16:05:18 -- nvmf/common.sh@124 -- # return 0 00:28:15.927 16:05:18 -- nvmf/common.sh@477 -- # '[' -n 66195 ']' 00:28:15.927 16:05:18 -- nvmf/common.sh@478 -- # killprocess 66195 00:28:15.927 16:05:18 -- common/autotest_common.sh@926 -- # '[' -z 66195 ']' 00:28:15.927 16:05:18 -- common/autotest_common.sh@930 -- # kill -0 66195 00:28:15.927 16:05:18 -- common/autotest_common.sh@931 -- # uname 00:28:15.927 16:05:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:15.927 16:05:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66195 00:28:15.927 killing process with pid 66195 00:28:15.927 16:05:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:15.927 16:05:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:15.927 16:05:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66195' 00:28:15.927 16:05:18 -- common/autotest_common.sh@945 -- # kill 66195 00:28:15.927 16:05:18 -- common/autotest_common.sh@950 -- # wait 66195 00:28:16.185 16:05:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:16.185 16:05:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:16.185 16:05:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:16.185 16:05:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:16.185 16:05:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:16.185 16:05:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.185 16:05:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:16.185 16:05:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.185 16:05:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:28:16.185 ************************************ 00:28:16.185 END TEST nvmf_multiconnection 00:28:16.185 
************************************ 00:28:16.185 00:28:16.185 real 0m48.626s 00:28:16.185 user 2m42.024s 00:28:16.185 sys 0m32.277s 00:28:16.185 16:05:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:16.185 16:05:18 -- common/autotest_common.sh@10 -- # set +x 00:28:16.185 16:05:18 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:16.185 16:05:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:16.185 16:05:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:16.185 16:05:18 -- common/autotest_common.sh@10 -- # set +x 00:28:16.185 ************************************ 00:28:16.185 START TEST nvmf_initiator_timeout 00:28:16.185 ************************************ 00:28:16.185 16:05:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:16.443 * Looking for test storage... 00:28:16.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:16.443 16:05:19 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:16.443 16:05:19 -- nvmf/common.sh@7 -- # uname -s 00:28:16.443 16:05:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.443 16:05:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.443 16:05:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.443 16:05:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.443 16:05:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.443 16:05:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.443 16:05:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.443 16:05:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.443 16:05:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.443 16:05:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.443 16:05:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:28:16.443 16:05:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:28:16.443 16:05:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.443 16:05:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.443 16:05:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:16.443 16:05:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:16.443 16:05:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.444 16:05:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.444 16:05:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.444 16:05:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.444 16:05:19 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.444 16:05:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.444 16:05:19 -- paths/export.sh@5 -- # export PATH 00:28:16.444 16:05:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.444 16:05:19 -- nvmf/common.sh@46 -- # : 0 00:28:16.444 16:05:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:16.444 16:05:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:16.444 16:05:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:16.444 16:05:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.444 16:05:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:16.444 16:05:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:16.444 16:05:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:16.444 16:05:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:16.444 16:05:19 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:16.444 16:05:19 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:16.444 16:05:19 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:28:16.444 16:05:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:16.444 16:05:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:16.444 16:05:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:16.444 16:05:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:16.444 16:05:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:16.444 16:05:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.444 16:05:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:16.444 16:05:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.444 16:05:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:28:16.444 16:05:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:28:16.444 16:05:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:28:16.444 16:05:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:28:16.444 16:05:19 -- nvmf/common.sh@419 -- # [[ tcp == 
tcp ]] 00:28:16.444 16:05:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:28:16.444 16:05:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:16.444 16:05:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:16.444 16:05:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:16.444 16:05:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:28:16.444 16:05:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:16.444 16:05:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:16.444 16:05:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:16.444 16:05:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:16.444 16:05:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:16.444 16:05:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:16.444 16:05:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:16.444 16:05:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:16.444 16:05:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:28:16.444 16:05:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:28:16.444 Cannot find device "nvmf_tgt_br" 00:28:16.444 16:05:19 -- nvmf/common.sh@154 -- # true 00:28:16.444 16:05:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:28:16.444 Cannot find device "nvmf_tgt_br2" 00:28:16.444 16:05:19 -- nvmf/common.sh@155 -- # true 00:28:16.444 16:05:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:28:16.444 16:05:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:28:16.444 Cannot find device "nvmf_tgt_br" 00:28:16.444 16:05:19 -- nvmf/common.sh@157 -- # true 00:28:16.444 16:05:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:28:16.444 Cannot find device "nvmf_tgt_br2" 00:28:16.444 16:05:19 -- nvmf/common.sh@158 -- # true 00:28:16.444 16:05:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:28:16.444 16:05:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:28:16.444 16:05:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:16.444 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:16.444 16:05:19 -- nvmf/common.sh@161 -- # true 00:28:16.444 16:05:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:16.444 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:16.444 16:05:19 -- nvmf/common.sh@162 -- # true 00:28:16.444 16:05:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:28:16.444 16:05:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:16.444 16:05:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:16.444 16:05:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:16.444 16:05:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:16.444 16:05:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:16.444 16:05:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:16.444 16:05:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:16.444 16:05:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 
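For reference, the namespace and veth plumbing assembled by the nvmf_veth_init steps above can be reproduced standalone with the condensed sketch below (same namespace, interface names and addresses as printed in the log; run as root; illustrative only, the link-up, bridging and iptables steps that follow in the log complete the topology):

  # target-side network namespace
  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: initiator side plus two target-side interfaces
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # move the target ends into the namespace
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: 10.0.0.1 stays with the initiator, 10.0.0.2/.3 live inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

The nvmf_br bridge created in the following steps enslaves the *_br peer ends, which is what lets the initiator in the root namespace reach 10.0.0.2 and 10.0.0.3 inside nvmf_tgt_ns_spdk, as the ping checks further down confirm.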
00:28:16.444 16:05:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:28:16.444 16:05:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:28:16.733 16:05:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:28:16.733 16:05:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:28:16.733 16:05:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:16.733 16:05:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:16.733 16:05:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:16.733 16:05:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:28:16.733 16:05:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:28:16.733 16:05:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:28:16.733 16:05:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:16.733 16:05:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:16.733 16:05:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:16.733 16:05:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:16.733 16:05:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:28:16.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:16.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:28:16.733 00:28:16.733 --- 10.0.0.2 ping statistics --- 00:28:16.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.733 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:28:16.733 16:05:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:28:16.733 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:16.733 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:28:16.733 00:28:16.733 --- 10.0.0.3 ping statistics --- 00:28:16.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.733 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:28:16.733 16:05:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:16.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:16.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:28:16.733 00:28:16.733 --- 10.0.0.1 ping statistics --- 00:28:16.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.733 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:28:16.733 16:05:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:16.733 16:05:19 -- nvmf/common.sh@421 -- # return 0 00:28:16.733 16:05:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:16.733 16:05:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:16.733 16:05:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:16.733 16:05:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:16.733 16:05:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:16.733 16:05:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:16.733 16:05:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:16.733 16:05:19 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:28:16.733 16:05:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:16.733 16:05:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:16.733 16:05:19 -- common/autotest_common.sh@10 -- # set +x 00:28:16.733 16:05:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:16.733 16:05:19 -- nvmf/common.sh@469 -- # nvmfpid=67249 00:28:16.733 16:05:19 -- nvmf/common.sh@470 -- # waitforlisten 67249 00:28:16.733 16:05:19 -- common/autotest_common.sh@819 -- # '[' -z 67249 ']' 00:28:16.733 16:05:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:16.734 16:05:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:16.734 16:05:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:16.734 16:05:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:16.734 16:05:19 -- common/autotest_common.sh@10 -- # set +x 00:28:16.734 [2024-07-22 16:05:19.496011] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:16.734 [2024-07-22 16:05:19.496113] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:16.991 [2024-07-22 16:05:19.637188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:16.991 [2024-07-22 16:05:19.708169] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:16.991 [2024-07-22 16:05:19.708347] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:16.991 [2024-07-22 16:05:19.708366] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:16.991 [2024-07-22 16:05:19.708377] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
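With nvmf_tgt now running inside the namespace, the initiator_timeout test provisions it over RPC. Assuming rpc_cmd is the usual test-suite wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, the calls that follow in the trace are roughly equivalent to:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0             # 64 MB RAM-backed bdev, 512-byte blocks
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # delay bdev over Malloc0, 30 for avg/p99 read and write latency
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192          # TCP transport
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The host then does an ordinary nvme connect to 10.0.0.2:4420 and starts a 60-second fio write job (4 KiB blocks, iodepth 1) against the resulting /dev/nvme0n1. Mid-run the test raises the Delay0 latencies from 30 to 31000000 and above with bdev_delay_update_latency (microsecond values, if that RPC takes the same unit as bdev_delay_create, so tens of seconds) and later drops them back to 30, which is the part that actually exercises the initiator-timeout path the test is named after.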
00:28:16.991 [2024-07-22 16:05:19.708532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.991 [2024-07-22 16:05:19.709005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:16.991 [2024-07-22 16:05:19.709183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:16.991 [2024-07-22 16:05:19.709192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.927 16:05:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:17.927 16:05:20 -- common/autotest_common.sh@852 -- # return 0 00:28:17.927 16:05:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:17.927 16:05:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:17.927 16:05:20 -- common/autotest_common.sh@10 -- # set +x 00:28:17.927 16:05:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:17.927 16:05:20 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:28:17.927 16:05:20 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:17.927 16:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.927 16:05:20 -- common/autotest_common.sh@10 -- # set +x 00:28:17.927 Malloc0 00:28:17.927 16:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.927 16:05:20 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:28:17.927 16:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.927 16:05:20 -- common/autotest_common.sh@10 -- # set +x 00:28:17.927 Delay0 00:28:17.927 16:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.927 16:05:20 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:17.927 16:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.927 16:05:20 -- common/autotest_common.sh@10 -- # set +x 00:28:17.927 [2024-07-22 16:05:20.620933] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:17.927 16:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.927 16:05:20 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:17.927 16:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.927 16:05:20 -- common/autotest_common.sh@10 -- # set +x 00:28:17.927 16:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.927 16:05:20 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:17.927 16:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.927 16:05:20 -- common/autotest_common.sh@10 -- # set +x 00:28:17.927 16:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.927 16:05:20 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:17.927 16:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.928 16:05:20 -- common/autotest_common.sh@10 -- # set +x 00:28:17.928 [2024-07-22 16:05:20.653117] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.928 16:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.928 16:05:20 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 --hostid=3afe7664-1acb-4c6d-8a94-b57f48f48b78 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:17.928 16:05:20 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:28:17.928 16:05:20 -- common/autotest_common.sh@1177 -- # local i=0 00:28:17.928 16:05:20 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:28:17.928 16:05:20 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:28:17.928 16:05:20 -- common/autotest_common.sh@1184 -- # sleep 2 00:28:20.463 16:05:22 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:28:20.463 16:05:22 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:28:20.463 16:05:22 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:28:20.463 16:05:22 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:28:20.463 16:05:22 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:28:20.463 16:05:22 -- common/autotest_common.sh@1187 -- # return 0 00:28:20.463 16:05:22 -- target/initiator_timeout.sh@35 -- # fio_pid=67313 00:28:20.463 16:05:22 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:28:20.463 16:05:22 -- target/initiator_timeout.sh@37 -- # sleep 3 00:28:20.463 [global] 00:28:20.463 thread=1 00:28:20.463 invalidate=1 00:28:20.463 rw=write 00:28:20.463 time_based=1 00:28:20.463 runtime=60 00:28:20.463 ioengine=libaio 00:28:20.463 direct=1 00:28:20.463 bs=4096 00:28:20.463 iodepth=1 00:28:20.463 norandommap=0 00:28:20.463 numjobs=1 00:28:20.463 00:28:20.463 verify_dump=1 00:28:20.463 verify_backlog=512 00:28:20.463 verify_state_save=0 00:28:20.463 do_verify=1 00:28:20.463 verify=crc32c-intel 00:28:20.463 [job0] 00:28:20.463 filename=/dev/nvme0n1 00:28:20.463 Could not set queue depth (nvme0n1) 00:28:20.463 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:20.463 fio-3.35 00:28:20.463 Starting 1 thread 00:28:23.034 16:05:25 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:28:23.034 16:05:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:23.034 16:05:25 -- common/autotest_common.sh@10 -- # set +x 00:28:23.034 true 00:28:23.034 16:05:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:23.034 16:05:25 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:28:23.034 16:05:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:23.034 16:05:25 -- common/autotest_common.sh@10 -- # set +x 00:28:23.034 true 00:28:23.034 16:05:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:23.034 16:05:25 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:28:23.034 16:05:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:23.034 16:05:25 -- common/autotest_common.sh@10 -- # set +x 00:28:23.034 true 00:28:23.034 16:05:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:23.034 16:05:25 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:28:23.034 16:05:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:23.034 16:05:25 -- common/autotest_common.sh@10 -- # set +x 00:28:23.034 true 00:28:23.034 16:05:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:23.034 16:05:25 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:28:26.319 16:05:28 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:28:26.319 16:05:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.319 16:05:28 -- common/autotest_common.sh@10 -- # set +x 00:28:26.319 true 00:28:26.319 16:05:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.319 16:05:28 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:28:26.319 16:05:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.319 16:05:28 -- common/autotest_common.sh@10 -- # set +x 00:28:26.319 true 00:28:26.319 16:05:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.319 16:05:28 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:28:26.319 16:05:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.319 16:05:28 -- common/autotest_common.sh@10 -- # set +x 00:28:26.319 true 00:28:26.319 16:05:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.319 16:05:28 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:28:26.319 16:05:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.319 16:05:28 -- common/autotest_common.sh@10 -- # set +x 00:28:26.319 true 00:28:26.319 16:05:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.319 16:05:28 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:28:26.319 16:05:28 -- target/initiator_timeout.sh@54 -- # wait 67313 00:29:22.601 00:29:22.601 job0: (groupid=0, jobs=1): err= 0: pid=67334: Mon Jul 22 16:06:23 2024 00:29:22.601 read: IOPS=702, BW=2812KiB/s (2879kB/s)(165MiB/60000msec) 00:29:22.601 slat (usec): min=13, max=16031, avg=20.94, stdev=98.59 00:29:22.601 clat (usec): min=135, max=40576k, avg=1190.53, stdev=197580.03 00:29:22.601 lat (usec): min=183, max=40576k, avg=1211.46, stdev=197580.05 00:29:22.601 clat percentiles (usec): 00:29:22.601 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 204], 00:29:22.601 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 227], 00:29:22.601 | 70.00th=[ 235], 80.00th=[ 249], 90.00th=[ 273], 95.00th=[ 293], 00:29:22.601 | 99.00th=[ 343], 99.50th=[ 363], 99.90th=[ 441], 99.95th=[ 490], 00:29:22.601 | 99.99th=[ 1303] 00:29:22.601 write: IOPS=708, BW=2833KiB/s (2901kB/s)(166MiB/60000msec); 0 zone resets 00:29:22.601 slat (usec): min=15, max=651, avg=28.93, stdev=10.74 00:29:22.601 clat (usec): min=100, max=2583, avg=175.72, stdev=29.00 00:29:22.601 lat (usec): min=149, max=2627, avg=204.66, stdev=32.88 00:29:22.601 clat percentiles (usec): 00:29:22.601 | 1.00th=[ 137], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:29:22.601 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:29:22.601 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 206], 95.00th=[ 223], 00:29:22.601 | 99.00th=[ 255], 99.50th=[ 269], 99.90th=[ 318], 99.95th=[ 343], 00:29:22.601 | 99.99th=[ 445] 00:29:22.601 bw ( KiB/s): min= 1072, max=11008, per=100.00%, avg=8505.18, stdev=1686.72, samples=39 00:29:22.601 iops : min= 268, max= 2752, avg=2126.33, stdev=421.69, samples=39 00:29:22.601 lat (usec) : 250=89.87%, 500=10.10%, 750=0.01%, 1000=0.01% 00:29:22.601 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:29:22.601 cpu : usr=0.76%, sys=2.72%, ctx=84686, majf=0, minf=2 00:29:22.601 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:22.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:29:22.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:22.601 issued rwts: total=42174,42496,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:22.601 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:22.601 00:29:22.601 Run status group 0 (all jobs): 00:29:22.601 READ: bw=2812KiB/s (2879kB/s), 2812KiB/s-2812KiB/s (2879kB/s-2879kB/s), io=165MiB (173MB), run=60000-60000msec 00:29:22.601 WRITE: bw=2833KiB/s (2901kB/s), 2833KiB/s-2833KiB/s (2901kB/s-2901kB/s), io=166MiB (174MB), run=60000-60000msec 00:29:22.601 00:29:22.601 Disk stats (read/write): 00:29:22.601 nvme0n1: ios=42188/42241, merge=0/0, ticks=9925/7880, in_queue=17805, util=99.55% 00:29:22.601 16:06:23 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:22.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:22.601 16:06:23 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:22.601 16:06:23 -- common/autotest_common.sh@1198 -- # local i=0 00:29:22.601 16:06:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:29:22.601 16:06:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:22.601 16:06:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:29:22.601 16:06:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:22.601 16:06:23 -- common/autotest_common.sh@1210 -- # return 0 00:29:22.601 16:06:23 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:29:22.601 16:06:23 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:29:22.601 nvmf hotplug test: fio successful as expected 00:29:22.601 16:06:23 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:22.601 16:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.601 16:06:23 -- common/autotest_common.sh@10 -- # set +x 00:29:22.601 16:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.601 16:06:23 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:29:22.601 16:06:23 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:29:22.602 16:06:23 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:29:22.602 16:06:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:22.602 16:06:23 -- nvmf/common.sh@116 -- # sync 00:29:22.602 16:06:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:22.602 16:06:23 -- nvmf/common.sh@119 -- # set +e 00:29:22.602 16:06:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:22.602 16:06:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:22.602 rmmod nvme_tcp 00:29:22.602 rmmod nvme_fabrics 00:29:22.602 rmmod nvme_keyring 00:29:22.602 16:06:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:22.602 16:06:23 -- nvmf/common.sh@123 -- # set -e 00:29:22.602 16:06:23 -- nvmf/common.sh@124 -- # return 0 00:29:22.602 16:06:23 -- nvmf/common.sh@477 -- # '[' -n 67249 ']' 00:29:22.602 16:06:23 -- nvmf/common.sh@478 -- # killprocess 67249 00:29:22.602 16:06:23 -- common/autotest_common.sh@926 -- # '[' -z 67249 ']' 00:29:22.602 16:06:23 -- common/autotest_common.sh@930 -- # kill -0 67249 00:29:22.602 16:06:23 -- common/autotest_common.sh@931 -- # uname 00:29:22.602 16:06:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:22.602 16:06:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67249 00:29:22.602 16:06:23 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:22.602 16:06:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:22.602 killing process with pid 67249 00:29:22.602 16:06:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67249' 00:29:22.602 16:06:23 -- common/autotest_common.sh@945 -- # kill 67249 00:29:22.602 16:06:23 -- common/autotest_common.sh@950 -- # wait 67249 00:29:22.602 16:06:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:22.602 16:06:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:22.602 16:06:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:22.602 16:06:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:22.602 16:06:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:22.602 16:06:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.602 16:06:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:22.602 16:06:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.602 16:06:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:29:22.602 00:29:22.602 real 1m4.495s 00:29:22.602 user 3m52.763s 00:29:22.602 sys 0m22.524s 00:29:22.602 16:06:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:22.602 16:06:23 -- common/autotest_common.sh@10 -- # set +x 00:29:22.602 ************************************ 00:29:22.602 END TEST nvmf_initiator_timeout 00:29:22.602 ************************************ 00:29:22.602 16:06:23 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:29:22.602 16:06:23 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:29:22.602 16:06:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:22.602 16:06:23 -- common/autotest_common.sh@10 -- # set +x 00:29:22.602 16:06:23 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:29:22.602 16:06:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:22.602 16:06:23 -- common/autotest_common.sh@10 -- # set +x 00:29:22.602 16:06:23 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:29:22.602 16:06:23 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:22.602 16:06:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:22.602 16:06:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:22.602 16:06:23 -- common/autotest_common.sh@10 -- # set +x 00:29:22.602 ************************************ 00:29:22.602 START TEST nvmf_identify 00:29:22.602 ************************************ 00:29:22.602 16:06:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:22.602 * Looking for test storage... 
00:29:22.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:22.602 16:06:23 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:22.602 16:06:23 -- nvmf/common.sh@7 -- # uname -s 00:29:22.602 16:06:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:22.602 16:06:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:22.602 16:06:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:22.602 16:06:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:22.602 16:06:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:22.602 16:06:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:22.602 16:06:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:22.602 16:06:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:22.602 16:06:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:22.602 16:06:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:22.602 16:06:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:29:22.602 16:06:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:29:22.602 16:06:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:22.602 16:06:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:22.602 16:06:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:22.602 16:06:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:22.602 16:06:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:22.602 16:06:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:22.602 16:06:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:22.602 16:06:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.602 16:06:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.602 16:06:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.602 16:06:23 -- paths/export.sh@5 
-- # export PATH 00:29:22.602 16:06:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.602 16:06:23 -- nvmf/common.sh@46 -- # : 0 00:29:22.602 16:06:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:22.602 16:06:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:22.602 16:06:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:22.602 16:06:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:22.602 16:06:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:22.602 16:06:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:22.602 16:06:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:22.602 16:06:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:22.602 16:06:23 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:22.602 16:06:23 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:22.602 16:06:23 -- host/identify.sh@14 -- # nvmftestinit 00:29:22.602 16:06:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:22.602 16:06:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:22.602 16:06:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:22.602 16:06:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:22.602 16:06:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:22.602 16:06:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.602 16:06:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:22.602 16:06:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.602 16:06:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:29:22.602 16:06:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:29:22.602 16:06:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:29:22.602 16:06:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:29:22.602 16:06:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:29:22.602 16:06:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:29:22.602 16:06:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:22.602 16:06:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.602 16:06:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:22.602 16:06:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:29:22.602 16:06:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:22.602 16:06:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:22.602 16:06:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:22.602 16:06:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:22.602 16:06:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:22.602 16:06:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:22.602 16:06:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:22.602 16:06:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:22.602 16:06:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:29:22.602 16:06:23 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:29:22.602 Cannot find device "nvmf_tgt_br" 00:29:22.602 16:06:23 -- nvmf/common.sh@154 -- # true 00:29:22.602 16:06:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:29:22.602 Cannot find device "nvmf_tgt_br2" 00:29:22.602 16:06:23 -- nvmf/common.sh@155 -- # true 00:29:22.602 16:06:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:29:22.602 16:06:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:29:22.602 Cannot find device "nvmf_tgt_br" 00:29:22.602 16:06:23 -- nvmf/common.sh@157 -- # true 00:29:22.602 16:06:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:29:22.602 Cannot find device "nvmf_tgt_br2" 00:29:22.602 16:06:23 -- nvmf/common.sh@158 -- # true 00:29:22.602 16:06:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:29:22.602 16:06:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:29:22.603 16:06:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:22.603 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:22.603 16:06:23 -- nvmf/common.sh@161 -- # true 00:29:22.603 16:06:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:22.603 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:22.603 16:06:23 -- nvmf/common.sh@162 -- # true 00:29:22.603 16:06:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:29:22.603 16:06:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:22.603 16:06:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:22.603 16:06:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:22.603 16:06:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:22.603 16:06:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:22.603 16:06:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:22.603 16:06:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:22.603 16:06:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:22.603 16:06:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:29:22.603 16:06:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:29:22.603 16:06:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:29:22.603 16:06:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:29:22.603 16:06:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:22.603 16:06:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:22.603 16:06:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:22.603 16:06:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:29:22.603 16:06:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:29:22.603 16:06:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:29:22.603 16:06:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:22.603 16:06:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:22.603 16:06:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:22.603 16:06:23 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:22.603 16:06:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:29:22.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:22.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:29:22.603 00:29:22.603 --- 10.0.0.2 ping statistics --- 00:29:22.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.603 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:29:22.603 16:06:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:29:22.603 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:22.603 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:29:22.603 00:29:22.603 --- 10.0.0.3 ping statistics --- 00:29:22.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.603 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:29:22.603 16:06:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:22.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:22.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:29:22.603 00:29:22.603 --- 10.0.0.1 ping statistics --- 00:29:22.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.603 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:29:22.603 16:06:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:22.603 16:06:23 -- nvmf/common.sh@421 -- # return 0 00:29:22.603 16:06:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:22.603 16:06:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:22.603 16:06:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:22.603 16:06:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:22.603 16:06:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:22.603 16:06:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:22.603 16:06:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:22.603 16:06:24 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:22.603 16:06:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:22.603 16:06:24 -- common/autotest_common.sh@10 -- # set +x 00:29:22.603 16:06:24 -- host/identify.sh@19 -- # nvmfpid=68171 00:29:22.603 16:06:24 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:22.603 16:06:24 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:22.603 16:06:24 -- host/identify.sh@23 -- # waitforlisten 68171 00:29:22.603 16:06:24 -- common/autotest_common.sh@819 -- # '[' -z 68171 ']' 00:29:22.603 16:06:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.603 16:06:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:22.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.603 16:06:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.603 16:06:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:22.603 16:06:24 -- common/autotest_common.sh@10 -- # set +x 00:29:22.603 [2024-07-22 16:06:24.068307] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
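The identify test has just launched its own nvmf_tgt instance (nvmfpid=68171) inside the same namespace. The flags on that command line are the standard SPDK application options; spelled out (meanings are the usual SPDK app_opts, not stated by the log itself):

    # -i 0      shared-memory id, referenced later by process_shm --id $NVMF_APP_SHM_ID
    # -e 0xFFFF tracepoint group mask (hence the "Tracepoint Group Mask 0xFFFF specified" notice below)
    # -m 0xF    core mask, four reactors on cores 0-3
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

waitforlisten then polls until the application answers on /var/tmp/spdk.sock before any RPCs are issued.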
00:29:22.603 [2024-07-22 16:06:24.068394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.603 [2024-07-22 16:06:24.206021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:22.603 [2024-07-22 16:06:24.278249] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:22.603 [2024-07-22 16:06:24.278425] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.603 [2024-07-22 16:06:24.278442] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.603 [2024-07-22 16:06:24.278452] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:22.603 [2024-07-22 16:06:24.278567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.603 [2024-07-22 16:06:24.278942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:22.603 [2024-07-22 16:06:24.279010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:22.603 [2024-07-22 16:06:24.279021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.603 16:06:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:22.603 16:06:25 -- common/autotest_common.sh@852 -- # return 0 00:29:22.603 16:06:25 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:22.603 16:06:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.603 16:06:25 -- common/autotest_common.sh@10 -- # set +x 00:29:22.603 [2024-07-22 16:06:25.166632] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.603 16:06:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.603 16:06:25 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:22.603 16:06:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:22.603 16:06:25 -- common/autotest_common.sh@10 -- # set +x 00:29:22.603 16:06:25 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:22.603 16:06:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.603 16:06:25 -- common/autotest_common.sh@10 -- # set +x 00:29:22.603 Malloc0 00:29:22.603 16:06:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.603 16:06:25 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:22.603 16:06:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.603 16:06:25 -- common/autotest_common.sh@10 -- # set +x 00:29:22.603 16:06:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.603 16:06:25 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:22.603 16:06:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.603 16:06:25 -- common/autotest_common.sh@10 -- # set +x 00:29:22.603 16:06:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.603 16:06:25 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:22.603 16:06:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.603 16:06:25 -- common/autotest_common.sh@10 -- # set +x 00:29:22.603 [2024-07-22 16:06:25.269935] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:22.603 16:06:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.603 16:06:25 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:22.603 16:06:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.603 16:06:25 -- common/autotest_common.sh@10 -- # set +x 00:29:22.603 16:06:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.603 16:06:25 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:22.603 16:06:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.603 16:06:25 -- common/autotest_common.sh@10 -- # set +x 00:29:22.603 [2024-07-22 16:06:25.293680] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:22.603 [ 00:29:22.603 { 00:29:22.603 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:22.603 "subtype": "Discovery", 00:29:22.603 "listen_addresses": [ 00:29:22.603 { 00:29:22.603 "transport": "TCP", 00:29:22.603 "trtype": "TCP", 00:29:22.603 "adrfam": "IPv4", 00:29:22.603 "traddr": "10.0.0.2", 00:29:22.603 "trsvcid": "4420" 00:29:22.603 } 00:29:22.603 ], 00:29:22.603 "allow_any_host": true, 00:29:22.603 "hosts": [] 00:29:22.603 }, 00:29:22.603 { 00:29:22.603 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:22.603 "subtype": "NVMe", 00:29:22.603 "listen_addresses": [ 00:29:22.603 { 00:29:22.603 "transport": "TCP", 00:29:22.603 "trtype": "TCP", 00:29:22.603 "adrfam": "IPv4", 00:29:22.603 "traddr": "10.0.0.2", 00:29:22.603 "trsvcid": "4420" 00:29:22.603 } 00:29:22.603 ], 00:29:22.603 "allow_any_host": true, 00:29:22.603 "hosts": [], 00:29:22.603 "serial_number": "SPDK00000000000001", 00:29:22.603 "model_number": "SPDK bdev Controller", 00:29:22.603 "max_namespaces": 32, 00:29:22.603 "min_cntlid": 1, 00:29:22.603 "max_cntlid": 65519, 00:29:22.603 "namespaces": [ 00:29:22.603 { 00:29:22.603 "nsid": 1, 00:29:22.603 "bdev_name": "Malloc0", 00:29:22.603 "name": "Malloc0", 00:29:22.603 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:22.603 "eui64": "ABCDEF0123456789", 00:29:22.604 "uuid": "8b05f0bd-514a-4543-8560-e4dabf708016" 00:29:22.604 } 00:29:22.604 ] 00:29:22.604 } 00:29:22.604 ] 00:29:22.604 16:06:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.604 16:06:25 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:22.604 [2024-07-22 16:06:25.337895] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
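Here spdk_nvme_identify acts as a userspace NVMe/TCP initiator: it connects to the discovery service at 10.0.0.2:4420 and dumps identify data and the discovery log for what it finds there, and -L all turns on the debug log flags that produce the very verbose nvme_tcp/nvme_ctrlr trace that follows. For orientation, that trace is the standard fabrics controller bring-up on the admin queue (qid 0, tqpair 0x1d2e270): FABRIC CONNECT, PROPERTY GET of VS and CAP, CC.EN toggled with waits on CSTS.RDY, IDENTIFY (cdw10:00000001, identify controller), SET FEATURES for async event configuration plus the outstanding ASYNC EVENT REQUESTs, the keep-alive timer, and eventually the discovery log page via GET LOG PAGE. From an ordinary host the same endpoint could be inspected with nvme-cli, roughly (an initiator-side analogue for illustration only, not part of the test):

    nvme discover -t tcp -a 10.0.0.2 -s 4420        # list what the discovery service advertises
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1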
00:29:22.604 [2024-07-22 16:06:25.337954] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68206 ] 00:29:22.867 [2024-07-22 16:06:25.488085] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:22.867 [2024-07-22 16:06:25.488188] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:22.867 [2024-07-22 16:06:25.488199] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:22.867 [2024-07-22 16:06:25.488215] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:22.867 [2024-07-22 16:06:25.488233] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:29:22.867 [2024-07-22 16:06:25.488399] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:22.867 [2024-07-22 16:06:25.488463] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d2e270 0 00:29:22.867 [2024-07-22 16:06:25.493530] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:22.867 [2024-07-22 16:06:25.493563] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:22.867 [2024-07-22 16:06:25.493572] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:22.867 [2024-07-22 16:06:25.493592] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:22.867 [2024-07-22 16:06:25.493655] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.867 [2024-07-22 16:06:25.493665] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.867 [2024-07-22 16:06:25.493670] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2e270) 00:29:22.867 [2024-07-22 16:06:25.493687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:22.867 [2024-07-22 16:06:25.493728] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d6d0, cid 0, qid 0 00:29:22.867 [2024-07-22 16:06:25.501515] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.867 [2024-07-22 16:06:25.501544] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.867 [2024-07-22 16:06:25.501552] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.867 [2024-07-22 16:06:25.501558] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d6d0) on tqpair=0x1d2e270 00:29:22.867 [2024-07-22 16:06:25.501574] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:22.867 [2024-07-22 16:06:25.501584] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:22.867 [2024-07-22 16:06:25.501592] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:22.867 [2024-07-22 16:06:25.501619] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.867 [2024-07-22 16:06:25.501627] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.867 [2024-07-22 
16:06:25.501633] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2e270) 00:29:22.867 [2024-07-22 16:06:25.501645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.867 [2024-07-22 16:06:25.501681] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d6d0, cid 0, qid 0 00:29:22.867 [2024-07-22 16:06:25.501783] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.867 [2024-07-22 16:06:25.501792] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.867 [2024-07-22 16:06:25.501797] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.867 [2024-07-22 16:06:25.501803] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d6d0) on tqpair=0x1d2e270 00:29:22.867 [2024-07-22 16:06:25.501812] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:22.867 [2024-07-22 16:06:25.501822] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:22.867 [2024-07-22 16:06:25.501832] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.867 [2024-07-22 16:06:25.501838] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.867 [2024-07-22 16:06:25.501843] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2e270) 00:29:22.867 [2024-07-22 16:06:25.501853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.867 [2024-07-22 16:06:25.501877] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d6d0, cid 0, qid 0 00:29:22.867 [2024-07-22 16:06:25.501968] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.867 [2024-07-22 16:06:25.501976] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.867 [2024-07-22 16:06:25.501981] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.867 [2024-07-22 16:06:25.501987] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d6d0) on tqpair=0x1d2e270 00:29:22.867 [2024-07-22 16:06:25.501996] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:22.867 [2024-07-22 16:06:25.502007] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:22.867 [2024-07-22 16:06:25.502016] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.502021] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.502026] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2e270) 00:29:22.868 [2024-07-22 16:06:25.502036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.868 [2024-07-22 16:06:25.502058] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d6d0, cid 0, qid 0 00:29:22.868 [2024-07-22 16:06:25.502139] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.868 [2024-07-22 16:06:25.502147] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.868 [2024-07-22 16:06:25.502152] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.502157] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d6d0) on tqpair=0x1d2e270 00:29:22.868 [2024-07-22 16:06:25.502166] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:22.868 [2024-07-22 16:06:25.502178] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.502185] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.502190] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2e270) 00:29:22.868 [2024-07-22 16:06:25.502199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.868 [2024-07-22 16:06:25.502220] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d6d0, cid 0, qid 0 00:29:22.868 [2024-07-22 16:06:25.502302] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.868 [2024-07-22 16:06:25.502310] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.868 [2024-07-22 16:06:25.502315] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.502320] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d6d0) on tqpair=0x1d2e270 00:29:22.868 [2024-07-22 16:06:25.502328] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:22.868 [2024-07-22 16:06:25.502335] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:22.868 [2024-07-22 16:06:25.502345] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:22.868 [2024-07-22 16:06:25.502463] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:22.868 [2024-07-22 16:06:25.502475] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:22.868 [2024-07-22 16:06:25.502512] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.502525] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.502533] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2e270) 00:29:22.868 [2024-07-22 16:06:25.502547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.868 [2024-07-22 16:06:25.502578] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d6d0, cid 0, qid 0 00:29:22.868 [2024-07-22 16:06:25.502661] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.868 [2024-07-22 16:06:25.502670] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.868 [2024-07-22 16:06:25.502675] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:29:22.868 [2024-07-22 16:06:25.502680] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d6d0) on tqpair=0x1d2e270 00:29:22.868 [2024-07-22 16:06:25.502689] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:22.868 [2024-07-22 16:06:25.502702] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.502708] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.502713] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2e270) 00:29:22.868 [2024-07-22 16:06:25.502722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.868 [2024-07-22 16:06:25.502744] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d6d0, cid 0, qid 0 00:29:22.868 [2024-07-22 16:06:25.502819] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.868 [2024-07-22 16:06:25.502827] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.868 [2024-07-22 16:06:25.502832] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.502837] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d6d0) on tqpair=0x1d2e270 00:29:22.868 [2024-07-22 16:06:25.502844] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:22.868 [2024-07-22 16:06:25.502851] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:22.868 [2024-07-22 16:06:25.502861] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:22.868 [2024-07-22 16:06:25.502905] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:22.868 [2024-07-22 16:06:25.502920] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.502926] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.502931] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2e270) 00:29:22.868 [2024-07-22 16:06:25.502942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.868 [2024-07-22 16:06:25.502967] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d6d0, cid 0, qid 0 00:29:22.868 [2024-07-22 16:06:25.503110] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:22.868 [2024-07-22 16:06:25.503125] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:22.868 [2024-07-22 16:06:25.503139] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.503146] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d2e270): datao=0, datal=4096, cccid=0 00:29:22.868 [2024-07-22 16:06:25.503153] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d6d6d0) on tqpair(0x1d2e270): expected_datao=0, 
payload_size=4096 00:29:22.868 [2024-07-22 16:06:25.503165] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.503171] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.503182] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.868 [2024-07-22 16:06:25.503190] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.868 [2024-07-22 16:06:25.503195] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.503200] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d6d0) on tqpair=0x1d2e270 00:29:22.868 [2024-07-22 16:06:25.503213] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:22.868 [2024-07-22 16:06:25.503220] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:22.868 [2024-07-22 16:06:25.503226] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:22.868 [2024-07-22 16:06:25.503233] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:22.868 [2024-07-22 16:06:25.503239] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:22.868 [2024-07-22 16:06:25.503246] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:22.868 [2024-07-22 16:06:25.503263] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:22.868 [2024-07-22 16:06:25.503274] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.503280] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.503285] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2e270) 00:29:22.868 [2024-07-22 16:06:25.503295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:22.868 [2024-07-22 16:06:25.503323] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d6d0, cid 0, qid 0 00:29:22.868 [2024-07-22 16:06:25.503411] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.868 [2024-07-22 16:06:25.503420] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.868 [2024-07-22 16:06:25.503425] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.503430] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d6d0) on tqpair=0x1d2e270 00:29:22.868 [2024-07-22 16:06:25.503442] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.503447] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.503452] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d2e270) 00:29:22.868 [2024-07-22 16:06:25.503461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.868 [2024-07-22 
16:06:25.503469] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.503475] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.503479] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d2e270) 00:29:22.868 [2024-07-22 16:06:25.503506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.868 [2024-07-22 16:06:25.503516] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.503521] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.503526] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d2e270) 00:29:22.868 [2024-07-22 16:06:25.503534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.868 [2024-07-22 16:06:25.503542] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.503547] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.868 [2024-07-22 16:06:25.503552] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.868 [2024-07-22 16:06:25.503560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.868 [2024-07-22 16:06:25.503566] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:22.868 [2024-07-22 16:06:25.503583] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:22.868 [2024-07-22 16:06:25.503592] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.503597] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.503602] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d2e270) 00:29:22.869 [2024-07-22 16:06:25.503611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.869 [2024-07-22 16:06:25.503639] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d6d0, cid 0, qid 0 00:29:22.869 [2024-07-22 16:06:25.503649] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d830, cid 1, qid 0 00:29:22.869 [2024-07-22 16:06:25.503655] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d990, cid 2, qid 0 00:29:22.869 [2024-07-22 16:06:25.503662] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.869 [2024-07-22 16:06:25.503668] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6dc50, cid 4, qid 0 00:29:22.869 [2024-07-22 16:06:25.503793] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.869 [2024-07-22 16:06:25.503812] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.869 [2024-07-22 16:06:25.503818] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.503824] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x1d6dc50) on tqpair=0x1d2e270 00:29:22.869 [2024-07-22 16:06:25.503832] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:22.869 [2024-07-22 16:06:25.503839] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:22.869 [2024-07-22 16:06:25.503855] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.503861] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.503866] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d2e270) 00:29:22.869 [2024-07-22 16:06:25.503875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.869 [2024-07-22 16:06:25.503899] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6dc50, cid 4, qid 0 00:29:22.869 [2024-07-22 16:06:25.503987] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:22.869 [2024-07-22 16:06:25.503996] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:22.869 [2024-07-22 16:06:25.504007] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504022] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d2e270): datao=0, datal=4096, cccid=4 00:29:22.869 [2024-07-22 16:06:25.504032] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d6dc50) on tqpair(0x1d2e270): expected_datao=0, payload_size=4096 00:29:22.869 [2024-07-22 16:06:25.504048] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504055] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504073] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.869 [2024-07-22 16:06:25.504082] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.869 [2024-07-22 16:06:25.504087] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504093] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6dc50) on tqpair=0x1d2e270 00:29:22.869 [2024-07-22 16:06:25.504112] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:22.869 [2024-07-22 16:06:25.504157] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504181] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504188] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d2e270) 00:29:22.869 [2024-07-22 16:06:25.504199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.869 [2024-07-22 16:06:25.504209] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504214] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504219] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d2e270) 00:29:22.869 [2024-07-22 16:06:25.504228] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.869 [2024-07-22 16:06:25.504262] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6dc50, cid 4, qid 0 00:29:22.869 [2024-07-22 16:06:25.504272] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6ddb0, cid 5, qid 0 00:29:22.869 [2024-07-22 16:06:25.504463] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:22.869 [2024-07-22 16:06:25.504498] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:22.869 [2024-07-22 16:06:25.504507] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504513] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d2e270): datao=0, datal=1024, cccid=4 00:29:22.869 [2024-07-22 16:06:25.504519] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d6dc50) on tqpair(0x1d2e270): expected_datao=0, payload_size=1024 00:29:22.869 [2024-07-22 16:06:25.504529] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504534] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504542] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.869 [2024-07-22 16:06:25.504549] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.869 [2024-07-22 16:06:25.504554] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504559] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6ddb0) on tqpair=0x1d2e270 00:29:22.869 [2024-07-22 16:06:25.504588] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.869 [2024-07-22 16:06:25.504598] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.869 [2024-07-22 16:06:25.504602] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504608] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6dc50) on tqpair=0x1d2e270 00:29:22.869 [2024-07-22 16:06:25.504632] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504639] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504644] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d2e270) 00:29:22.869 [2024-07-22 16:06:25.504654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.869 [2024-07-22 16:06:25.504686] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6dc50, cid 4, qid 0 00:29:22.869 [2024-07-22 16:06:25.504801] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:22.869 [2024-07-22 16:06:25.504819] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:22.869 [2024-07-22 16:06:25.504826] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504831] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d2e270): datao=0, datal=3072, cccid=4 00:29:22.869 [2024-07-22 16:06:25.504837] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d6dc50) on tqpair(0x1d2e270): expected_datao=0, payload_size=3072 00:29:22.869 [2024-07-22 
16:06:25.504847] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504852] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504863] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.869 [2024-07-22 16:06:25.504871] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.869 [2024-07-22 16:06:25.504875] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504881] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6dc50) on tqpair=0x1d2e270 00:29:22.869 [2024-07-22 16:06:25.504894] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504900] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.504905] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d2e270) 00:29:22.869 [2024-07-22 16:06:25.504915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.869 [2024-07-22 16:06:25.504944] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6dc50, cid 4, qid 0 00:29:22.869 [2024-07-22 16:06:25.505054] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:22.869 [2024-07-22 16:06:25.505068] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:22.869 [2024-07-22 16:06:25.505074] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.505079] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d2e270): datao=0, datal=8, cccid=4 00:29:22.869 [2024-07-22 16:06:25.505085] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d6dc50) on tqpair(0x1d2e270): expected_datao=0, payload_size=8 00:29:22.869 [2024-07-22 16:06:25.505095] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:22.869 [2024-07-22 16:06:25.505100] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:22.869 ===================================================== 00:29:22.869 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:22.869 ===================================================== 00:29:22.869 Controller Capabilities/Features 00:29:22.869 ================================ 00:29:22.869 Vendor ID: 0000 00:29:22.869 Subsystem Vendor ID: 0000 00:29:22.869 Serial Number: .................... 00:29:22.869 Model Number: ........................................ 
00:29:22.869 Firmware Version: 24.01.1 00:29:22.869 Recommended Arb Burst: 0 00:29:22.869 IEEE OUI Identifier: 00 00 00 00:29:22.869 Multi-path I/O 00:29:22.869 May have multiple subsystem ports: No 00:29:22.869 May have multiple controllers: No 00:29:22.869 Associated with SR-IOV VF: No 00:29:22.869 Max Data Transfer Size: 131072 00:29:22.869 Max Number of Namespaces: 0 00:29:22.869 Max Number of I/O Queues: 1024 00:29:22.869 NVMe Specification Version (VS): 1.3 00:29:22.869 NVMe Specification Version (Identify): 1.3 00:29:22.869 Maximum Queue Entries: 128 00:29:22.869 Contiguous Queues Required: Yes 00:29:22.869 Arbitration Mechanisms Supported 00:29:22.869 Weighted Round Robin: Not Supported 00:29:22.869 Vendor Specific: Not Supported 00:29:22.869 Reset Timeout: 15000 ms 00:29:22.869 Doorbell Stride: 4 bytes 00:29:22.869 NVM Subsystem Reset: Not Supported 00:29:22.869 Command Sets Supported 00:29:22.869 NVM Command Set: Supported 00:29:22.869 Boot Partition: Not Supported 00:29:22.869 Memory Page Size Minimum: 4096 bytes 00:29:22.869 Memory Page Size Maximum: 4096 bytes 00:29:22.870 Persistent Memory Region: Not Supported 00:29:22.870 Optional Asynchronous Events Supported 00:29:22.870 Namespace Attribute Notices: Not Supported 00:29:22.870 Firmware Activation Notices: Not Supported 00:29:22.870 ANA Change Notices: Not Supported 00:29:22.870 PLE Aggregate Log Change Notices: Not Supported 00:29:22.870 LBA Status Info Alert Notices: Not Supported 00:29:22.870 EGE Aggregate Log Change Notices: Not Supported 00:29:22.870 Normal NVM Subsystem Shutdown event: Not Supported 00:29:22.870 Zone Descriptor Change Notices: Not Supported 00:29:22.870 Discovery Log Change Notices: Supported 00:29:22.870 Controller Attributes 00:29:22.870 128-bit Host Identifier: Not Supported 00:29:22.870 Non-Operational Permissive Mode: Not Supported 00:29:22.870 NVM Sets: Not Supported 00:29:22.870 Read Recovery Levels: Not Supported 00:29:22.870 Endurance Groups: Not Supported 00:29:22.870 Predictable Latency Mode: Not Supported 00:29:22.870 Traffic Based Keep ALive: Not Supported 00:29:22.870 Namespace Granularity: Not Supported 00:29:22.870 SQ Associations: Not Supported 00:29:22.870 UUID List: Not Supported 00:29:22.870 Multi-Domain Subsystem: Not Supported 00:29:22.870 Fixed Capacity Management: Not Supported 00:29:22.870 Variable Capacity Management: Not Supported 00:29:22.870 Delete Endurance Group: Not Supported 00:29:22.870 Delete NVM Set: Not Supported 00:29:22.870 Extended LBA Formats Supported: Not Supported 00:29:22.870 Flexible Data Placement Supported: Not Supported 00:29:22.870 00:29:22.870 Controller Memory Buffer Support 00:29:22.870 ================================ 00:29:22.870 Supported: No 00:29:22.870 00:29:22.870 Persistent Memory Region Support 00:29:22.870 ================================ 00:29:22.870 Supported: No 00:29:22.870 00:29:22.870 Admin Command Set Attributes 00:29:22.870 ============================ 00:29:22.870 Security Send/Receive: Not Supported 00:29:22.870 Format NVM: Not Supported 00:29:22.870 Firmware Activate/Download: Not Supported 00:29:22.870 Namespace Management: Not Supported 00:29:22.870 Device Self-Test: Not Supported 00:29:22.870 Directives: Not Supported 00:29:22.870 NVMe-MI: Not Supported 00:29:22.870 Virtualization Management: Not Supported 00:29:22.870 Doorbell Buffer Config: Not Supported 00:29:22.870 Get LBA Status Capability: Not Supported 00:29:22.870 Command & Feature Lockdown Capability: Not Supported 00:29:22.870 Abort Command Limit: 1 00:29:22.870 
Async Event Request Limit: 4 00:29:22.870 Number of Firmware Slots: N/A 00:29:22.870 Firmware Slot 1 Read-Only: N/A 00:29:22.870 [2024-07-22 16:06:25.505119] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.870 [2024-07-22 16:06:25.505128] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.870 [2024-07-22 16:06:25.505132] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.870 [2024-07-22 16:06:25.505138] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6dc50) on tqpair=0x1d2e270 00:29:22.870 Firmware Activation Without Reset: N/A 00:29:22.870 Multiple Update Detection Support: N/A 00:29:22.870 Firmware Update Granularity: No Information Provided 00:29:22.870 Per-Namespace SMART Log: No 00:29:22.870 Asymmetric Namespace Access Log Page: Not Supported 00:29:22.870 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:22.870 Command Effects Log Page: Not Supported 00:29:22.870 Get Log Page Extended Data: Supported 00:29:22.870 Telemetry Log Pages: Not Supported 00:29:22.870 Persistent Event Log Pages: Not Supported 00:29:22.870 Supported Log Pages Log Page: May Support 00:29:22.870 Commands Supported & Effects Log Page: Not Supported 00:29:22.870 Feature Identifiers & Effects Log Page:May Support 00:29:22.870 NVMe-MI Commands & Effects Log Page: May Support 00:29:22.870 Data Area 4 for Telemetry Log: Not Supported 00:29:22.870 Error Log Page Entries Supported: 128 00:29:22.870 Keep Alive: Not Supported 00:29:22.870 00:29:22.870 NVM Command Set Attributes 00:29:22.870 ========================== 00:29:22.870 Submission Queue Entry Size 00:29:22.870 Max: 1 00:29:22.870 Min: 1 00:29:22.870 Completion Queue Entry Size 00:29:22.870 Max: 1 00:29:22.870 Min: 1 00:29:22.870 Number of Namespaces: 0 00:29:22.870 Compare Command: Not Supported 00:29:22.870 Write Uncorrectable Command: Not Supported 00:29:22.870 Dataset Management Command: Not Supported 00:29:22.870 Write Zeroes Command: Not Supported 00:29:22.870 Set Features Save Field: Not Supported 00:29:22.870 Reservations: Not Supported 00:29:22.870 Timestamp: Not Supported 00:29:22.870 Copy: Not Supported 00:29:22.870 Volatile Write Cache: Not Present 00:29:22.870 Atomic Write Unit (Normal): 1 00:29:22.870 Atomic Write Unit (PFail): 1 00:29:22.870 Atomic Compare & Write Unit: 1 00:29:22.870 Fused Compare & Write: Supported 00:29:22.870 Scatter-Gather List 00:29:22.870 SGL Command Set: Supported 00:29:22.870 SGL Keyed: Supported 00:29:22.870 SGL Bit Bucket Descriptor: Not Supported 00:29:22.870 SGL Metadata Pointer: Not Supported 00:29:22.870 Oversized SGL: Not Supported 00:29:22.870 SGL Metadata Address: Not Supported 00:29:22.870 SGL Offset: Supported 00:29:22.870 Transport SGL Data Block: Not Supported 00:29:22.870 Replay Protected Memory Block: Not Supported 00:29:22.870 00:29:22.870 Firmware Slot Information 00:29:22.870 ========================= 00:29:22.870 Active slot: 0 00:29:22.870 00:29:22.870 00:29:22.870 Error Log 00:29:22.870 ========= 00:29:22.870 00:29:22.870 Active Namespaces 00:29:22.870 ================= 00:29:22.870 Discovery Log Page 00:29:22.870 ================== 00:29:22.870 Generation Counter: 2 00:29:22.870 Number of Records: 2 00:29:22.870 Record Format: 0 00:29:22.870 00:29:22.870 Discovery Log Entry 0 00:29:22.870 ---------------------- 00:29:22.870 Transport Type: 3 (TCP) 00:29:22.870 Address Family: 1 (IPv4) 00:29:22.870 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:22.870 Entry Flags: 00:29:22.870 Duplicate
Returned Information: 1 00:29:22.870 Explicit Persistent Connection Support for Discovery: 1 00:29:22.870 Transport Requirements: 00:29:22.870 Secure Channel: Not Required 00:29:22.870 Port ID: 0 (0x0000) 00:29:22.870 Controller ID: 65535 (0xffff) 00:29:22.870 Admin Max SQ Size: 128 00:29:22.870 Transport Service Identifier: 4420 00:29:22.870 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:22.870 Transport Address: 10.0.0.2 00:29:22.870 Discovery Log Entry 1 00:29:22.870 ---------------------- 00:29:22.870 Transport Type: 3 (TCP) 00:29:22.870 Address Family: 1 (IPv4) 00:29:22.870 Subsystem Type: 2 (NVM Subsystem) 00:29:22.870 Entry Flags: 00:29:22.870 Duplicate Returned Information: 0 00:29:22.870 Explicit Persistent Connection Support for Discovery: 0 00:29:22.870 Transport Requirements: 00:29:22.870 Secure Channel: Not Required 00:29:22.870 Port ID: 0 (0x0000) 00:29:22.870 Controller ID: 65535 (0xffff) 00:29:22.870 Admin Max SQ Size: 128 00:29:22.870 Transport Service Identifier: 4420 00:29:22.870 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:22.870 Transport Address: 10.0.0.2 [2024-07-22 16:06:25.505280] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:22.870 [2024-07-22 16:06:25.505305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.870 [2024-07-22 16:06:25.505315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.870 [2024-07-22 16:06:25.505323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.870 [2024-07-22 16:06:25.505331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.870 [2024-07-22 16:06:25.505343] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.870 [2024-07-22 16:06:25.505348] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.870 [2024-07-22 16:06:25.505353] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.870 [2024-07-22 16:06:25.505364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.870 [2024-07-22 16:06:25.505392] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.870 [2024-07-22 16:06:25.505479] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.871 [2024-07-22 16:06:25.509514] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.871 [2024-07-22 16:06:25.509527] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.509533] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.871 [2024-07-22 16:06:25.509548] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.509554] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.509558] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.871 [2024-07-22 16:06:25.509570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.871 [2024-07-22 16:06:25.509610] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.871 [2024-07-22 16:06:25.509733] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.871 [2024-07-22 16:06:25.509741] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.871 [2024-07-22 16:06:25.509746] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.509752] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.871 [2024-07-22 16:06:25.509760] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:22.871 [2024-07-22 16:06:25.509766] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:22.871 [2024-07-22 16:06:25.509780] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.509785] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.509790] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.871 [2024-07-22 16:06:25.509800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.871 [2024-07-22 16:06:25.509822] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.871 [2024-07-22 16:06:25.509903] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.871 [2024-07-22 16:06:25.509911] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.871 [2024-07-22 16:06:25.509916] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.509921] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.871 [2024-07-22 16:06:25.509936] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.509942] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.509947] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.871 [2024-07-22 16:06:25.509956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.871 [2024-07-22 16:06:25.509977] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.871 [2024-07-22 16:06:25.510067] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.871 [2024-07-22 16:06:25.510082] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.871 [2024-07-22 16:06:25.510087] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.510093] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.871 [2024-07-22 16:06:25.510108] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.510114] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.510118] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1d2e270) 00:29:22.871 [2024-07-22 16:06:25.510128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.871 [2024-07-22 16:06:25.510150] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.871 [2024-07-22 16:06:25.510231] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.871 [2024-07-22 16:06:25.510239] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.871 [2024-07-22 16:06:25.510244] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.510249] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.871 [2024-07-22 16:06:25.510263] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.510269] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.510274] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.871 [2024-07-22 16:06:25.510283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.871 [2024-07-22 16:06:25.510304] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.871 [2024-07-22 16:06:25.510376] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.871 [2024-07-22 16:06:25.510394] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.871 [2024-07-22 16:06:25.510400] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.510406] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.871 [2024-07-22 16:06:25.510421] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.510427] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.510431] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.871 [2024-07-22 16:06:25.510441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.871 [2024-07-22 16:06:25.510464] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.871 [2024-07-22 16:06:25.510546] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.871 [2024-07-22 16:06:25.510560] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.871 [2024-07-22 16:06:25.510565] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.510571] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.871 [2024-07-22 16:06:25.510586] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.510592] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.510597] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.871 [2024-07-22 16:06:25.510607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:22.871 [2024-07-22 16:06:25.510634] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.871 [2024-07-22 16:06:25.510709] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.871 [2024-07-22 16:06:25.510727] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.871 [2024-07-22 16:06:25.510733] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.510738] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.871 [2024-07-22 16:06:25.510754] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.510759] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.510764] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.871 [2024-07-22 16:06:25.510778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.871 [2024-07-22 16:06:25.510802] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.871 [2024-07-22 16:06:25.510878] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.871 [2024-07-22 16:06:25.510913] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.871 [2024-07-22 16:06:25.510919] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.510925] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.871 [2024-07-22 16:06:25.510941] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.510947] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.510952] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.871 [2024-07-22 16:06:25.510962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.871 [2024-07-22 16:06:25.510987] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.871 [2024-07-22 16:06:25.511064] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.871 [2024-07-22 16:06:25.511082] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.871 [2024-07-22 16:06:25.511088] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.511094] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.871 [2024-07-22 16:06:25.511108] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.511114] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.511119] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.871 [2024-07-22 16:06:25.511129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.871 [2024-07-22 16:06:25.511152] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.871 [2024-07-22 16:06:25.511237] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.871 [2024-07-22 16:06:25.511251] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.871 [2024-07-22 16:06:25.511256] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.511262] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.871 [2024-07-22 16:06:25.511276] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.511282] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.511287] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.871 [2024-07-22 16:06:25.511296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.871 [2024-07-22 16:06:25.511318] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.871 [2024-07-22 16:06:25.511391] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.871 [2024-07-22 16:06:25.511400] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.871 [2024-07-22 16:06:25.511404] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.871 [2024-07-22 16:06:25.511410] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.872 [2024-07-22 16:06:25.511424] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.511429] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.511434] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.872 [2024-07-22 16:06:25.511444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.872 [2024-07-22 16:06:25.511464] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.872 [2024-07-22 16:06:25.511549] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.872 [2024-07-22 16:06:25.511560] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.872 [2024-07-22 16:06:25.511565] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.511570] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.872 [2024-07-22 16:06:25.511584] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.511590] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.511595] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.872 [2024-07-22 16:06:25.511605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.872 [2024-07-22 16:06:25.511628] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.872 [2024-07-22 16:06:25.511699] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.872 [2024-07-22 16:06:25.511712] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.872 
[2024-07-22 16:06:25.511718] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.511723] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.872 [2024-07-22 16:06:25.511737] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.511743] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.511748] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.872 [2024-07-22 16:06:25.511758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.872 [2024-07-22 16:06:25.511780] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.872 [2024-07-22 16:06:25.511850] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.872 [2024-07-22 16:06:25.511858] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.872 [2024-07-22 16:06:25.511863] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.511868] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.872 [2024-07-22 16:06:25.511882] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.511888] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.511893] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.872 [2024-07-22 16:06:25.511902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.872 [2024-07-22 16:06:25.511923] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.872 [2024-07-22 16:06:25.512018] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.872 [2024-07-22 16:06:25.512027] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.872 [2024-07-22 16:06:25.512031] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.512037] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.872 [2024-07-22 16:06:25.512050] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.512056] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.512061] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.872 [2024-07-22 16:06:25.512070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.872 [2024-07-22 16:06:25.512095] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.872 [2024-07-22 16:06:25.512174] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.872 [2024-07-22 16:06:25.512189] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.872 [2024-07-22 16:06:25.512195] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.512200] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.872 [2024-07-22 16:06:25.512215] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.512221] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.512226] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.872 [2024-07-22 16:06:25.512236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.872 [2024-07-22 16:06:25.512259] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.872 [2024-07-22 16:06:25.512339] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.872 [2024-07-22 16:06:25.512353] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.872 [2024-07-22 16:06:25.512358] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.512364] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.872 [2024-07-22 16:06:25.512378] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.512384] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.512389] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.872 [2024-07-22 16:06:25.512398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.872 [2024-07-22 16:06:25.512420] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.872 [2024-07-22 16:06:25.512512] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.872 [2024-07-22 16:06:25.512523] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.872 [2024-07-22 16:06:25.512528] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.512533] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.872 [2024-07-22 16:06:25.512548] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.512554] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.512559] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.872 [2024-07-22 16:06:25.512568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.872 [2024-07-22 16:06:25.512591] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.872 [2024-07-22 16:06:25.512671] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.872 [2024-07-22 16:06:25.512680] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.872 [2024-07-22 16:06:25.512685] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.512690] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.872 [2024-07-22 16:06:25.512704] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.512709] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.512714] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.872 [2024-07-22 16:06:25.512724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.872 [2024-07-22 16:06:25.512744] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.872 [2024-07-22 16:06:25.512822] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.872 [2024-07-22 16:06:25.512830] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.872 [2024-07-22 16:06:25.512835] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.512840] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.872 [2024-07-22 16:06:25.512862] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.512871] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.872 [2024-07-22 16:06:25.512877] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.872 [2024-07-22 16:06:25.512886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.872 [2024-07-22 16:06:25.512909] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.873 [2024-07-22 16:06:25.512989] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.873 [2024-07-22 16:06:25.513005] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.873 [2024-07-22 16:06:25.513010] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.513015] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.873 [2024-07-22 16:06:25.513029] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.513035] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.513040] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.873 [2024-07-22 16:06:25.513050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.873 [2024-07-22 16:06:25.513082] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.873 [2024-07-22 16:06:25.513174] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.873 [2024-07-22 16:06:25.513183] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.873 [2024-07-22 16:06:25.513188] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.513194] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.873 [2024-07-22 16:06:25.513208] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.513214] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.513219] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1d2e270) 00:29:22.873 [2024-07-22 16:06:25.513228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.873 [2024-07-22 16:06:25.513249] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.873 [2024-07-22 16:06:25.513329] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.873 [2024-07-22 16:06:25.513337] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.873 [2024-07-22 16:06:25.513342] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.513347] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.873 [2024-07-22 16:06:25.513361] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.513366] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.513371] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.873 [2024-07-22 16:06:25.513380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.873 [2024-07-22 16:06:25.513401] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.873 [2024-07-22 16:06:25.513477] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.873 [2024-07-22 16:06:25.517503] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.873 [2024-07-22 16:06:25.517528] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.517535] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.873 [2024-07-22 16:06:25.517557] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.517563] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.517568] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d2e270) 00:29:22.873 [2024-07-22 16:06:25.517579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.873 [2024-07-22 16:06:25.517612] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6daf0, cid 3, qid 0 00:29:22.873 [2024-07-22 16:06:25.517699] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.873 [2024-07-22 16:06:25.517708] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.873 [2024-07-22 16:06:25.517713] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.517718] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6daf0) on tqpair=0x1d2e270 00:29:22.873 [2024-07-22 16:06:25.517729] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:29:22.873 00:29:22.873 16:06:25 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:22.873 [2024-07-22 16:06:25.563835] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:29:22.873 [2024-07-22 16:06:25.563889] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68214 ] 00:29:22.873 [2024-07-22 16:06:25.708532] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:22.873 [2024-07-22 16:06:25.708627] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:22.873 [2024-07-22 16:06:25.708637] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:22.873 [2024-07-22 16:06:25.708653] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:22.873 [2024-07-22 16:06:25.708668] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:29:22.873 [2024-07-22 16:06:25.708823] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:29:22.873 [2024-07-22 16:06:25.708886] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9f6270 0 00:29:22.873 [2024-07-22 16:06:25.714520] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:22.873 [2024-07-22 16:06:25.714550] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:22.873 [2024-07-22 16:06:25.714558] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:22.873 [2024-07-22 16:06:25.714564] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:22.873 [2024-07-22 16:06:25.714616] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.714625] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.714630] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9f6270) 00:29:22.873 [2024-07-22 16:06:25.714648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:22.873 [2024-07-22 16:06:25.714685] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa356d0, cid 0, qid 0 00:29:22.873 [2024-07-22 16:06:25.722513] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.873 [2024-07-22 16:06:25.722543] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.873 [2024-07-22 16:06:25.722550] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.722557] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa356d0) on tqpair=0x9f6270 00:29:22.873 [2024-07-22 16:06:25.722576] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:22.873 [2024-07-22 16:06:25.722587] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:22.873 [2024-07-22 16:06:25.722596] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:22.873 [2024-07-22 16:06:25.722616] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.722623] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.722628] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9f6270) 00:29:22.873 [2024-07-22 16:06:25.722640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.873 [2024-07-22 16:06:25.722679] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa356d0, cid 0, qid 0 00:29:22.873 [2024-07-22 16:06:25.722733] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.873 [2024-07-22 16:06:25.722742] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.873 [2024-07-22 16:06:25.722747] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.722753] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa356d0) on tqpair=0x9f6270 00:29:22.873 [2024-07-22 16:06:25.722761] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:22.873 [2024-07-22 16:06:25.722772] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:22.873 [2024-07-22 16:06:25.722782] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.722788] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.722793] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9f6270) 00:29:22.873 [2024-07-22 16:06:25.722803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.873 [2024-07-22 16:06:25.722826] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa356d0, cid 0, qid 0 00:29:22.873 [2024-07-22 16:06:25.722892] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.873 [2024-07-22 16:06:25.722905] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.873 [2024-07-22 16:06:25.722910] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.722915] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa356d0) on tqpair=0x9f6270 00:29:22.873 [2024-07-22 16:06:25.722923] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:22.873 [2024-07-22 16:06:25.722936] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:22.873 [2024-07-22 16:06:25.722946] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.722952] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.722957] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9f6270) 00:29:22.873 [2024-07-22 16:06:25.722967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.873 [2024-07-22 16:06:25.722993] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa356d0, cid 0, qid 0 00:29:22.873 [2024-07-22 16:06:25.723046] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.873 [2024-07-22 16:06:25.723064] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.873 [2024-07-22 16:06:25.723070] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.723076] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa356d0) on tqpair=0x9f6270 00:29:22.873 [2024-07-22 16:06:25.723084] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:22.873 [2024-07-22 16:06:25.723099] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.873 [2024-07-22 16:06:25.723105] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.723110] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9f6270) 00:29:22.874 [2024-07-22 16:06:25.723120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.874 [2024-07-22 16:06:25.723144] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa356d0, cid 0, qid 0 00:29:22.874 [2024-07-22 16:06:25.723202] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.874 [2024-07-22 16:06:25.723211] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.874 [2024-07-22 16:06:25.723216] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.723221] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa356d0) on tqpair=0x9f6270 00:29:22.874 [2024-07-22 16:06:25.723228] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:22.874 [2024-07-22 16:06:25.723235] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:22.874 [2024-07-22 16:06:25.723246] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:22.874 [2024-07-22 16:06:25.723354] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:22.874 [2024-07-22 16:06:25.723361] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:22.874 [2024-07-22 16:06:25.723373] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.723379] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.723384] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9f6270) 00:29:22.874 [2024-07-22 16:06:25.723393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.874 [2024-07-22 16:06:25.723416] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa356d0, cid 0, qid 0 00:29:22.874 [2024-07-22 16:06:25.723469] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.874 [2024-07-22 16:06:25.723478] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.874 [2024-07-22 16:06:25.723501] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.723513] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa356d0) on tqpair=0x9f6270 00:29:22.874 [2024-07-22 16:06:25.723525] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:22.874 [2024-07-22 16:06:25.723546] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.723554] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.723559] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9f6270) 00:29:22.874 [2024-07-22 16:06:25.723570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.874 [2024-07-22 16:06:25.723598] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa356d0, cid 0, qid 0 00:29:22.874 [2024-07-22 16:06:25.723647] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.874 [2024-07-22 16:06:25.723657] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.874 [2024-07-22 16:06:25.723662] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.723667] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa356d0) on tqpair=0x9f6270 00:29:22.874 [2024-07-22 16:06:25.723674] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:22.874 [2024-07-22 16:06:25.723681] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:22.874 [2024-07-22 16:06:25.723692] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:22.874 [2024-07-22 16:06:25.723711] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:22.874 [2024-07-22 16:06:25.723725] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.723731] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.723736] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9f6270) 00:29:22.874 [2024-07-22 16:06:25.723746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.874 [2024-07-22 16:06:25.723772] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa356d0, cid 0, qid 0 00:29:22.874 [2024-07-22 16:06:25.723866] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:22.874 [2024-07-22 16:06:25.723875] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:22.874 [2024-07-22 16:06:25.723880] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.723886] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9f6270): datao=0, datal=4096, cccid=0 00:29:22.874 [2024-07-22 16:06:25.723892] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa356d0) on tqpair(0x9f6270): expected_datao=0, payload_size=4096 00:29:22.874 [2024-07-22 16:06:25.723904] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.723911] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:22.874 [2024-07-22 
16:06:25.723922] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.874 [2024-07-22 16:06:25.723930] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.874 [2024-07-22 16:06:25.723935] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.723940] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa356d0) on tqpair=0x9f6270 00:29:22.874 [2024-07-22 16:06:25.723952] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:22.874 [2024-07-22 16:06:25.723959] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:22.874 [2024-07-22 16:06:25.723965] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:22.874 [2024-07-22 16:06:25.723971] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:22.874 [2024-07-22 16:06:25.723978] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:22.874 [2024-07-22 16:06:25.723984] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:22.874 [2024-07-22 16:06:25.724002] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:22.874 [2024-07-22 16:06:25.724012] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.724018] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.724023] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9f6270) 00:29:22.874 [2024-07-22 16:06:25.724033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:22.874 [2024-07-22 16:06:25.724057] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa356d0, cid 0, qid 0 00:29:22.874 [2024-07-22 16:06:25.724112] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.874 [2024-07-22 16:06:25.724131] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.874 [2024-07-22 16:06:25.724137] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.724143] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa356d0) on tqpair=0x9f6270 00:29:22.874 [2024-07-22 16:06:25.724153] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.724159] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.724164] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9f6270) 00:29:22.874 [2024-07-22 16:06:25.724174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.874 [2024-07-22 16:06:25.724186] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.724195] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.724203] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9f6270) 
00:29:22.874 [2024-07-22 16:06:25.724214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.874 [2024-07-22 16:06:25.724222] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.724228] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.724232] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9f6270) 00:29:22.874 [2024-07-22 16:06:25.724240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.874 [2024-07-22 16:06:25.724248] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.724253] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.724258] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:22.874 [2024-07-22 16:06:25.724266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.874 [2024-07-22 16:06:25.724273] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:22.874 [2024-07-22 16:06:25.724291] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:22.874 [2024-07-22 16:06:25.724302] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.724307] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.724312] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9f6270) 00:29:22.874 [2024-07-22 16:06:25.724322] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.874 [2024-07-22 16:06:25.724350] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa356d0, cid 0, qid 0 00:29:22.874 [2024-07-22 16:06:25.724359] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35830, cid 1, qid 0 00:29:22.874 [2024-07-22 16:06:25.724366] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35990, cid 2, qid 0 00:29:22.874 [2024-07-22 16:06:25.724372] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:22.874 [2024-07-22 16:06:25.724379] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35c50, cid 4, qid 0 00:29:22.874 [2024-07-22 16:06:25.724501] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.874 [2024-07-22 16:06:25.724519] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.874 [2024-07-22 16:06:25.724525] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.874 [2024-07-22 16:06:25.724531] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35c50) on tqpair=0x9f6270 00:29:22.874 [2024-07-22 16:06:25.724538] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:29:22.875 [2024-07-22 16:06:25.724546] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:29:22.875 [2024-07-22 16:06:25.724558] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:22.875 [2024-07-22 16:06:25.724572] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:22.875 [2024-07-22 16:06:25.724581] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.724587] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.724592] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9f6270) 00:29:22.875 [2024-07-22 16:06:25.724602] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:22.875 [2024-07-22 16:06:25.724629] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35c50, cid 4, qid 0 00:29:22.875 [2024-07-22 16:06:25.724683] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.875 [2024-07-22 16:06:25.724692] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.875 [2024-07-22 16:06:25.724697] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.724702] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35c50) on tqpair=0x9f6270 00:29:22.875 [2024-07-22 16:06:25.724783] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:22.875 [2024-07-22 16:06:25.724807] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:22.875 [2024-07-22 16:06:25.724820] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.724826] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.724831] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9f6270) 00:29:22.875 [2024-07-22 16:06:25.724841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.875 [2024-07-22 16:06:25.724866] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35c50, cid 4, qid 0 00:29:22.875 [2024-07-22 16:06:25.724925] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:22.875 [2024-07-22 16:06:25.724934] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:22.875 [2024-07-22 16:06:25.724939] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.724944] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9f6270): datao=0, datal=4096, cccid=4 00:29:22.875 [2024-07-22 16:06:25.724950] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa35c50) on tqpair(0x9f6270): expected_datao=0, payload_size=4096 00:29:22.875 [2024-07-22 16:06:25.724961] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.724967] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.724978] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:29:22.875 [2024-07-22 16:06:25.724986] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.875 [2024-07-22 16:06:25.724991] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.724996] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35c50) on tqpair=0x9f6270 00:29:22.875 [2024-07-22 16:06:25.725016] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:22.875 [2024-07-22 16:06:25.725030] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:22.875 [2024-07-22 16:06:25.725044] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:22.875 [2024-07-22 16:06:25.725054] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.725060] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.725065] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9f6270) 00:29:22.875 [2024-07-22 16:06:25.725075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.875 [2024-07-22 16:06:25.725099] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35c50, cid 4, qid 0 00:29:22.875 [2024-07-22 16:06:25.725173] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:22.875 [2024-07-22 16:06:25.725182] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:22.875 [2024-07-22 16:06:25.725187] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.725193] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9f6270): datao=0, datal=4096, cccid=4 00:29:22.875 [2024-07-22 16:06:25.725199] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa35c50) on tqpair(0x9f6270): expected_datao=0, payload_size=4096 00:29:22.875 [2024-07-22 16:06:25.725209] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.725215] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.725225] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.875 [2024-07-22 16:06:25.725234] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.875 [2024-07-22 16:06:25.725239] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.725244] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35c50) on tqpair=0x9f6270 00:29:22.875 [2024-07-22 16:06:25.725266] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:22.875 [2024-07-22 16:06:25.725287] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:22.875 [2024-07-22 16:06:25.725299] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.725305] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.725310] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9f6270) 00:29:22.875 [2024-07-22 16:06:25.725320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.875 [2024-07-22 16:06:25.725345] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35c50, cid 4, qid 0 00:29:22.875 [2024-07-22 16:06:25.725406] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:22.875 [2024-07-22 16:06:25.725414] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:22.875 [2024-07-22 16:06:25.725420] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.725425] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9f6270): datao=0, datal=4096, cccid=4 00:29:22.875 [2024-07-22 16:06:25.725431] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa35c50) on tqpair(0x9f6270): expected_datao=0, payload_size=4096 00:29:22.875 [2024-07-22 16:06:25.725441] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.725447] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.725458] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.875 [2024-07-22 16:06:25.725466] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.875 [2024-07-22 16:06:25.725471] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.725476] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35c50) on tqpair=0x9f6270 00:29:22.875 [2024-07-22 16:06:25.725501] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:22.875 [2024-07-22 16:06:25.725515] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:22.875 [2024-07-22 16:06:25.725529] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:22.875 [2024-07-22 16:06:25.725538] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:22.875 [2024-07-22 16:06:25.725545] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:22.875 [2024-07-22 16:06:25.725552] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:22.875 [2024-07-22 16:06:25.725558] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:22.875 [2024-07-22 16:06:25.725566] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:22.875 [2024-07-22 16:06:25.725587] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.725594] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.725599] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9f6270) 00:29:22.875 [2024-07-22 16:06:25.725608] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.875 [2024-07-22 16:06:25.725618] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.725623] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.725629] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9f6270) 00:29:22.875 [2024-07-22 16:06:25.725637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.875 [2024-07-22 16:06:25.725669] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35c50, cid 4, qid 0 00:29:22.875 [2024-07-22 16:06:25.725680] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35db0, cid 5, qid 0 00:29:22.875 [2024-07-22 16:06:25.725744] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.875 [2024-07-22 16:06:25.725752] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.875 [2024-07-22 16:06:25.725757] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.725763] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35c50) on tqpair=0x9f6270 00:29:22.875 [2024-07-22 16:06:25.725772] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.875 [2024-07-22 16:06:25.725780] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.875 [2024-07-22 16:06:25.725784] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.725790] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35db0) on tqpair=0x9f6270 00:29:22.875 [2024-07-22 16:06:25.725804] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.725810] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.875 [2024-07-22 16:06:25.725815] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9f6270) 00:29:22.875 [2024-07-22 16:06:25.725824] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.875 [2024-07-22 16:06:25.725846] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35db0, cid 5, qid 0 00:29:22.875 [2024-07-22 16:06:25.725899] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.875 [2024-07-22 16:06:25.725908] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.876 [2024-07-22 16:06:25.725913] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.876 [2024-07-22 16:06:25.725918] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35db0) on tqpair=0x9f6270 00:29:22.876 [2024-07-22 16:06:25.725932] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.876 [2024-07-22 16:06:25.725938] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.876 [2024-07-22 16:06:25.725943] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9f6270) 00:29:22.876 [2024-07-22 16:06:25.725961] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.876 [2024-07-22 
16:06:25.725991] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35db0, cid 5, qid 0 00:29:23.137 [2024-07-22 16:06:25.726039] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.137 [2024-07-22 16:06:25.726048] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.137 [2024-07-22 16:06:25.726053] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.726058] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35db0) on tqpair=0x9f6270 00:29:23.137 [2024-07-22 16:06:25.726072] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.726079] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.726084] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9f6270) 00:29:23.137 [2024-07-22 16:06:25.726093] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.137 [2024-07-22 16:06:25.726116] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35db0, cid 5, qid 0 00:29:23.137 [2024-07-22 16:06:25.726163] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.137 [2024-07-22 16:06:25.726178] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.137 [2024-07-22 16:06:25.726184] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.726190] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35db0) on tqpair=0x9f6270 00:29:23.137 [2024-07-22 16:06:25.726208] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.726215] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.726220] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9f6270) 00:29:23.137 [2024-07-22 16:06:25.726230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.137 [2024-07-22 16:06:25.726240] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.726245] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.726250] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9f6270) 00:29:23.137 [2024-07-22 16:06:25.726259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.137 [2024-07-22 16:06:25.726269] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.726274] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.726279] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x9f6270) 00:29:23.137 [2024-07-22 16:06:25.726287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.137 [2024-07-22 16:06:25.726298] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.726303] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.726308] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9f6270) 00:29:23.137 [2024-07-22 16:06:25.726317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.137 [2024-07-22 16:06:25.726341] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35db0, cid 5, qid 0 00:29:23.137 [2024-07-22 16:06:25.726350] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35c50, cid 4, qid 0 00:29:23.137 [2024-07-22 16:06:25.726357] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35f10, cid 6, qid 0 00:29:23.137 [2024-07-22 16:06:25.726363] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36070, cid 7, qid 0 00:29:23.137 [2024-07-22 16:06:25.730518] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.137 [2024-07-22 16:06:25.730550] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.137 [2024-07-22 16:06:25.730561] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.730567] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9f6270): datao=0, datal=8192, cccid=5 00:29:23.137 [2024-07-22 16:06:25.730574] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa35db0) on tqpair(0x9f6270): expected_datao=0, payload_size=8192 00:29:23.137 [2024-07-22 16:06:25.730585] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.730592] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.730599] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.137 [2024-07-22 16:06:25.730607] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.137 [2024-07-22 16:06:25.730612] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.730617] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9f6270): datao=0, datal=512, cccid=4 00:29:23.137 [2024-07-22 16:06:25.730623] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa35c50) on tqpair(0x9f6270): expected_datao=0, payload_size=512 00:29:23.137 [2024-07-22 16:06:25.730633] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.730638] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.730645] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.137 [2024-07-22 16:06:25.730653] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.137 [2024-07-22 16:06:25.730658] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.730663] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9f6270): datao=0, datal=512, cccid=6 00:29:23.137 [2024-07-22 16:06:25.730669] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa35f10) on tqpair(0x9f6270): expected_datao=0, payload_size=512 00:29:23.137 [2024-07-22 16:06:25.730678] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.730683] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.730691] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.137 [2024-07-22 16:06:25.730698] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.137 [2024-07-22 16:06:25.730703] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.730708] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9f6270): datao=0, datal=4096, cccid=7 00:29:23.137 [2024-07-22 16:06:25.730714] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa36070) on tqpair(0x9f6270): expected_datao=0, payload_size=4096 00:29:23.137 [2024-07-22 16:06:25.730724] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.730729] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.730737] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.137 [2024-07-22 16:06:25.730744] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.137 [2024-07-22 16:06:25.730749] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.730755] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35db0) on tqpair=0x9f6270 00:29:23.137 [2024-07-22 16:06:25.730780] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.137 [2024-07-22 16:06:25.730789] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.137 [2024-07-22 16:06:25.730794] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.137 [2024-07-22 16:06:25.730799] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35c50) on tqpair=0x9f6270 00:29:23.137 [2024-07-22 16:06:25.730812] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.138 [2024-07-22 16:06:25.730820] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.138 [2024-07-22 16:06:25.730825] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.138 [2024-07-22 16:06:25.730830] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35f10) on tqpair=0x9f6270 00:29:23.138 [2024-07-22 16:06:25.730840] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.138 [2024-07-22 16:06:25.730848] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.138 [2024-07-22 16:06:25.730853] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.138 [2024-07-22 16:06:25.730858] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36070) on tqpair=0x9f6270 00:29:23.138 ===================================================== 00:29:23.138 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:23.138 ===================================================== 00:29:23.138 Controller Capabilities/Features 00:29:23.138 ================================ 00:29:23.138 Vendor ID: 8086 00:29:23.138 Subsystem Vendor ID: 8086 00:29:23.138 Serial Number: SPDK00000000000001 00:29:23.138 Model Number: SPDK bdev Controller 00:29:23.138 Firmware Version: 24.01.1 00:29:23.138 Recommended Arb Burst: 6 00:29:23.138 IEEE OUI Identifier: e4 d2 5c 00:29:23.138 Multi-path I/O 00:29:23.138 May have multiple subsystem ports: Yes 00:29:23.138 May have multiple controllers: Yes 00:29:23.138 Associated with SR-IOV VF: No 00:29:23.138 Max Data Transfer Size: 131072 00:29:23.138 Max Number of Namespaces: 32 00:29:23.138 Max Number of I/O 
Queues: 127
00:29:23.138 NVMe Specification Version (VS): 1.3
00:29:23.138 NVMe Specification Version (Identify): 1.3
00:29:23.138 Maximum Queue Entries: 128
00:29:23.138 Contiguous Queues Required: Yes
00:29:23.138 Arbitration Mechanisms Supported
00:29:23.138 Weighted Round Robin: Not Supported
00:29:23.138 Vendor Specific: Not Supported
00:29:23.138 Reset Timeout: 15000 ms
00:29:23.138 Doorbell Stride: 4 bytes
00:29:23.138 NVM Subsystem Reset: Not Supported
00:29:23.138 Command Sets Supported
00:29:23.138 NVM Command Set: Supported
00:29:23.138 Boot Partition: Not Supported
00:29:23.138 Memory Page Size Minimum: 4096 bytes
00:29:23.138 Memory Page Size Maximum: 4096 bytes
00:29:23.138 Persistent Memory Region: Not Supported
00:29:23.138 Optional Asynchronous Events Supported
00:29:23.138 Namespace Attribute Notices: Supported
00:29:23.138 Firmware Activation Notices: Not Supported
00:29:23.138 ANA Change Notices: Not Supported
00:29:23.138 PLE Aggregate Log Change Notices: Not Supported
00:29:23.138 LBA Status Info Alert Notices: Not Supported
00:29:23.138 EGE Aggregate Log Change Notices: Not Supported
00:29:23.138 Normal NVM Subsystem Shutdown event: Not Supported
00:29:23.138 Zone Descriptor Change Notices: Not Supported
00:29:23.138 Discovery Log Change Notices: Not Supported
00:29:23.138 Controller Attributes
00:29:23.138 128-bit Host Identifier: Supported
00:29:23.138 Non-Operational Permissive Mode: Not Supported
00:29:23.138 NVM Sets: Not Supported
00:29:23.138 Read Recovery Levels: Not Supported
00:29:23.138 Endurance Groups: Not Supported
00:29:23.138 Predictable Latency Mode: Not Supported
00:29:23.138 Traffic Based Keep ALive: Not Supported
00:29:23.138 Namespace Granularity: Not Supported
00:29:23.138 SQ Associations: Not Supported
00:29:23.138 UUID List: Not Supported
00:29:23.138 Multi-Domain Subsystem: Not Supported
00:29:23.138 Fixed Capacity Management: Not Supported
00:29:23.138 Variable Capacity Management: Not Supported
00:29:23.138 Delete Endurance Group: Not Supported
00:29:23.138 Delete NVM Set: Not Supported
00:29:23.138 Extended LBA Formats Supported: Not Supported
00:29:23.138 Flexible Data Placement Supported: Not Supported
00:29:23.138 
00:29:23.138 Controller Memory Buffer Support
00:29:23.138 ================================
00:29:23.138 Supported: No
00:29:23.138 
00:29:23.138 Persistent Memory Region Support
00:29:23.138 ================================
00:29:23.138 Supported: No
00:29:23.138 
00:29:23.138 Admin Command Set Attributes
00:29:23.138 ============================
00:29:23.138 Security Send/Receive: Not Supported
00:29:23.138 Format NVM: Not Supported
00:29:23.138 Firmware Activate/Download: Not Supported
00:29:23.138 Namespace Management: Not Supported
00:29:23.138 Device Self-Test: Not Supported
00:29:23.138 Directives: Not Supported
00:29:23.138 NVMe-MI: Not Supported
00:29:23.138 Virtualization Management: Not Supported
00:29:23.138 Doorbell Buffer Config: Not Supported
00:29:23.138 Get LBA Status Capability: Not Supported
00:29:23.138 Command & Feature Lockdown Capability: Not Supported
00:29:23.138 Abort Command Limit: 4
00:29:23.138 Async Event Request Limit: 4
00:29:23.138 Number of Firmware Slots: N/A
00:29:23.138 Firmware Slot 1 Read-Only: N/A
00:29:23.138 Firmware Activation Without Reset: N/A
00:29:23.138 Multiple Update Detection Support: N/A
00:29:23.138 Firmware Update Granularity: No Information Provided
00:29:23.138 Per-Namespace SMART Log: No
00:29:23.138 Asymmetric Namespace Access Log Page: Not Supported
00:29:23.138 
Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:23.138 Command Effects Log Page: Supported 00:29:23.138 Get Log Page Extended Data: Supported 00:29:23.138 Telemetry Log Pages: Not Supported 00:29:23.138 Persistent Event Log Pages: Not Supported 00:29:23.138 Supported Log Pages Log Page: May Support 00:29:23.138 Commands Supported & Effects Log Page: Not Supported 00:29:23.138 Feature Identifiers & Effects Log Page:May Support 00:29:23.138 NVMe-MI Commands & Effects Log Page: May Support 00:29:23.138 Data Area 4 for Telemetry Log: Not Supported 00:29:23.138 Error Log Page Entries Supported: 128 00:29:23.138 Keep Alive: Supported 00:29:23.138 Keep Alive Granularity: 10000 ms 00:29:23.138 00:29:23.138 NVM Command Set Attributes 00:29:23.138 ========================== 00:29:23.138 Submission Queue Entry Size 00:29:23.138 Max: 64 00:29:23.138 Min: 64 00:29:23.138 Completion Queue Entry Size 00:29:23.138 Max: 16 00:29:23.138 Min: 16 00:29:23.138 Number of Namespaces: 32 00:29:23.138 Compare Command: Supported 00:29:23.138 Write Uncorrectable Command: Not Supported 00:29:23.138 Dataset Management Command: Supported 00:29:23.138 Write Zeroes Command: Supported 00:29:23.138 Set Features Save Field: Not Supported 00:29:23.138 Reservations: Supported 00:29:23.138 Timestamp: Not Supported 00:29:23.138 Copy: Supported 00:29:23.138 Volatile Write Cache: Present 00:29:23.138 Atomic Write Unit (Normal): 1 00:29:23.138 Atomic Write Unit (PFail): 1 00:29:23.138 Atomic Compare & Write Unit: 1 00:29:23.138 Fused Compare & Write: Supported 00:29:23.138 Scatter-Gather List 00:29:23.138 SGL Command Set: Supported 00:29:23.138 SGL Keyed: Supported 00:29:23.138 SGL Bit Bucket Descriptor: Not Supported 00:29:23.138 SGL Metadata Pointer: Not Supported 00:29:23.138 Oversized SGL: Not Supported 00:29:23.138 SGL Metadata Address: Not Supported 00:29:23.138 SGL Offset: Supported 00:29:23.138 Transport SGL Data Block: Not Supported 00:29:23.138 Replay Protected Memory Block: Not Supported 00:29:23.138 00:29:23.138 Firmware Slot Information 00:29:23.138 ========================= 00:29:23.138 Active slot: 1 00:29:23.138 Slot 1 Firmware Revision: 24.01.1 00:29:23.138 00:29:23.138 00:29:23.138 Commands Supported and Effects 00:29:23.138 ============================== 00:29:23.138 Admin Commands 00:29:23.138 -------------- 00:29:23.138 Get Log Page (02h): Supported 00:29:23.138 Identify (06h): Supported 00:29:23.138 Abort (08h): Supported 00:29:23.138 Set Features (09h): Supported 00:29:23.138 Get Features (0Ah): Supported 00:29:23.138 Asynchronous Event Request (0Ch): Supported 00:29:23.138 Keep Alive (18h): Supported 00:29:23.138 I/O Commands 00:29:23.138 ------------ 00:29:23.138 Flush (00h): Supported LBA-Change 00:29:23.138 Write (01h): Supported LBA-Change 00:29:23.138 Read (02h): Supported 00:29:23.138 Compare (05h): Supported 00:29:23.138 Write Zeroes (08h): Supported LBA-Change 00:29:23.138 Dataset Management (09h): Supported LBA-Change 00:29:23.138 Copy (19h): Supported LBA-Change 00:29:23.138 Unknown (79h): Supported LBA-Change 00:29:23.138 Unknown (7Ah): Supported 00:29:23.138 00:29:23.139 Error Log 00:29:23.139 ========= 00:29:23.139 00:29:23.139 Arbitration 00:29:23.139 =========== 00:29:23.139 Arbitration Burst: 1 00:29:23.139 00:29:23.139 Power Management 00:29:23.139 ================ 00:29:23.139 Number of Power States: 1 00:29:23.139 Current Power State: Power State #0 00:29:23.139 Power State #0: 00:29:23.139 Max Power: 0.00 W 00:29:23.139 Non-Operational State: Operational 00:29:23.139 Entry Latency: Not 
Reported 00:29:23.139 Exit Latency: Not Reported 00:29:23.139 Relative Read Throughput: 0 00:29:23.139 Relative Read Latency: 0 00:29:23.139 Relative Write Throughput: 0 00:29:23.139 Relative Write Latency: 0 00:29:23.139 Idle Power: Not Reported 00:29:23.139 Active Power: Not Reported 00:29:23.139 Non-Operational Permissive Mode: Not Supported 00:29:23.139 00:29:23.139 Health Information 00:29:23.139 ================== 00:29:23.139 Critical Warnings: 00:29:23.139 Available Spare Space: OK 00:29:23.139 Temperature: OK 00:29:23.139 Device Reliability: OK 00:29:23.139 Read Only: No 00:29:23.139 Volatile Memory Backup: OK 00:29:23.139 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:23.139 Temperature Threshold: [2024-07-22 16:06:25.731037] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.731049] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.731054] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9f6270) 00:29:23.139 [2024-07-22 16:06:25.731066] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.139 [2024-07-22 16:06:25.731102] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa36070, cid 7, qid 0 00:29:23.139 [2024-07-22 16:06:25.731157] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.139 [2024-07-22 16:06:25.731166] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.139 [2024-07-22 16:06:25.731171] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.731176] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa36070) on tqpair=0x9f6270 00:29:23.139 [2024-07-22 16:06:25.731236] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:23.139 [2024-07-22 16:06:25.731265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.139 [2024-07-22 16:06:25.731275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.139 [2024-07-22 16:06:25.731284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.139 [2024-07-22 16:06:25.731292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.139 [2024-07-22 16:06:25.731305] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.731310] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.731315] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.139 [2024-07-22 16:06:25.731326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.139 [2024-07-22 16:06:25.731355] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.139 [2024-07-22 16:06:25.731403] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.139 [2024-07-22 16:06:25.731412] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.139 
[2024-07-22 16:06:25.731417] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.731422] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.139 [2024-07-22 16:06:25.731433] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.731439] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.731444] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.139 [2024-07-22 16:06:25.731454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.139 [2024-07-22 16:06:25.731482] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.139 [2024-07-22 16:06:25.731566] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.139 [2024-07-22 16:06:25.731578] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.139 [2024-07-22 16:06:25.731583] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.731588] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.139 [2024-07-22 16:06:25.731596] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:23.139 [2024-07-22 16:06:25.731602] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:23.139 [2024-07-22 16:06:25.731617] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.731623] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.731628] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.139 [2024-07-22 16:06:25.731638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.139 [2024-07-22 16:06:25.731666] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.139 [2024-07-22 16:06:25.731723] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.139 [2024-07-22 16:06:25.731736] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.139 [2024-07-22 16:06:25.731742] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.731747] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.139 [2024-07-22 16:06:25.731762] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.731768] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.731774] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.139 [2024-07-22 16:06:25.731783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.139 [2024-07-22 16:06:25.731806] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.139 [2024-07-22 16:06:25.731852] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.139 [2024-07-22 
16:06:25.731861] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.139 [2024-07-22 16:06:25.731866] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.731871] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.139 [2024-07-22 16:06:25.731885] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.731891] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.731896] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.139 [2024-07-22 16:06:25.731905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.139 [2024-07-22 16:06:25.731934] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.139 [2024-07-22 16:06:25.731982] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.139 [2024-07-22 16:06:25.731991] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.139 [2024-07-22 16:06:25.731997] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.732002] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.139 [2024-07-22 16:06:25.732016] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.732022] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.732028] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.139 [2024-07-22 16:06:25.732038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.139 [2024-07-22 16:06:25.732061] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.139 [2024-07-22 16:06:25.732110] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.139 [2024-07-22 16:06:25.732119] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.139 [2024-07-22 16:06:25.732123] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.732129] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.139 [2024-07-22 16:06:25.732142] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.732149] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.732154] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.139 [2024-07-22 16:06:25.732163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.139 [2024-07-22 16:06:25.732185] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.139 [2024-07-22 16:06:25.732234] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.139 [2024-07-22 16:06:25.732243] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.139 [2024-07-22 16:06:25.732248] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.139 
[2024-07-22 16:06:25.732253] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.139 [2024-07-22 16:06:25.732267] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.139 [2024-07-22 16:06:25.732273] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.732278] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.140 [2024-07-22 16:06:25.732288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.140 [2024-07-22 16:06:25.732309] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.140 [2024-07-22 16:06:25.732355] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.140 [2024-07-22 16:06:25.732369] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.140 [2024-07-22 16:06:25.732375] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.732381] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.140 [2024-07-22 16:06:25.732395] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.732401] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.732406] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.140 [2024-07-22 16:06:25.732416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.140 [2024-07-22 16:06:25.732438] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.140 [2024-07-22 16:06:25.732502] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.140 [2024-07-22 16:06:25.732512] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.140 [2024-07-22 16:06:25.732517] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.732522] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.140 [2024-07-22 16:06:25.732537] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.732543] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.732548] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.140 [2024-07-22 16:06:25.732557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.140 [2024-07-22 16:06:25.732582] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.140 [2024-07-22 16:06:25.732631] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.140 [2024-07-22 16:06:25.732640] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.140 [2024-07-22 16:06:25.732645] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.732650] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.140 [2024-07-22 16:06:25.732664] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.732670] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.732675] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.140 [2024-07-22 16:06:25.732685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.140 [2024-07-22 16:06:25.732706] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.140 [2024-07-22 16:06:25.732756] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.140 [2024-07-22 16:06:25.732765] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.140 [2024-07-22 16:06:25.732770] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.732775] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.140 [2024-07-22 16:06:25.732789] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.732795] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.732800] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.140 [2024-07-22 16:06:25.732809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.140 [2024-07-22 16:06:25.732831] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.140 [2024-07-22 16:06:25.732875] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.140 [2024-07-22 16:06:25.732884] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.140 [2024-07-22 16:06:25.732889] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.732894] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.140 [2024-07-22 16:06:25.732908] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.732914] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.732919] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.140 [2024-07-22 16:06:25.732929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.140 [2024-07-22 16:06:25.732950] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.140 [2024-07-22 16:06:25.733008] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.140 [2024-07-22 16:06:25.733028] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.140 [2024-07-22 16:06:25.733036] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.733044] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.140 [2024-07-22 16:06:25.733067] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.733074] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.733080] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.140 [2024-07-22 16:06:25.733090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.140 [2024-07-22 16:06:25.733120] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.140 [2024-07-22 16:06:25.733170] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.140 [2024-07-22 16:06:25.733179] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.140 [2024-07-22 16:06:25.733184] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.733189] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.140 [2024-07-22 16:06:25.733203] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.733209] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.733215] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.140 [2024-07-22 16:06:25.733224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.140 [2024-07-22 16:06:25.733246] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.140 [2024-07-22 16:06:25.733310] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.140 [2024-07-22 16:06:25.733319] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.140 [2024-07-22 16:06:25.733324] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.733333] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.140 [2024-07-22 16:06:25.733352] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.733361] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.733366] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.140 [2024-07-22 16:06:25.733376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.140 [2024-07-22 16:06:25.733400] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.140 [2024-07-22 16:06:25.733456] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.140 [2024-07-22 16:06:25.733464] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.140 [2024-07-22 16:06:25.733469] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.733475] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.140 [2024-07-22 16:06:25.733504] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.733512] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.733517] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.140 [2024-07-22 16:06:25.733527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.140 [2024-07-22 16:06:25.733552] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.140 [2024-07-22 16:06:25.733599] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.140 [2024-07-22 16:06:25.733608] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.140 [2024-07-22 16:06:25.733612] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.733618] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.140 [2024-07-22 16:06:25.733631] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.733638] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.733643] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.140 [2024-07-22 16:06:25.733652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.140 [2024-07-22 16:06:25.733674] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.140 [2024-07-22 16:06:25.733718] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.140 [2024-07-22 16:06:25.733727] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.140 [2024-07-22 16:06:25.733732] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.733737] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.140 [2024-07-22 16:06:25.733750] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.733757] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.140 [2024-07-22 16:06:25.733762] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.140 [2024-07-22 16:06:25.733771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.141 [2024-07-22 16:06:25.733792] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.141 [2024-07-22 16:06:25.733837] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.141 [2024-07-22 16:06:25.733845] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.141 [2024-07-22 16:06:25.733850] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.141 [2024-07-22 16:06:25.733856] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.141 [2024-07-22 16:06:25.733869] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.141 [2024-07-22 16:06:25.733875] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.141 [2024-07-22 16:06:25.733880] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.141 [2024-07-22 16:06:25.733889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.141 [2024-07-22 16:06:25.733910] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 
00:29:23.141 [2024-07-22 16:06:25.733963] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.141 [2024-07-22 16:06:25.733971] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.141 [2024-07-22 16:06:25.733976] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.141 [2024-07-22 16:06:25.733982] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.141 [2024-07-22 16:06:25.733995] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.141 [2024-07-22 16:06:25.734001] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.141 [2024-07-22 16:06:25.734006] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.141 [2024-07-22 16:06:25.734015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.141 [2024-07-22 16:06:25.734036] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.141 [2024-07-22 16:06:25.734085] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.141 [2024-07-22 16:06:25.734093] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.141 [2024-07-22 16:06:25.734098] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.141 [2024-07-22 16:06:25.734104] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.141 [2024-07-22 16:06:25.734117] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.141 [2024-07-22 16:06:25.734123] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.141 [2024-07-22 16:06:25.734128] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.141 [2024-07-22 16:06:25.734138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.141 [2024-07-22 16:06:25.734158] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.141 [2024-07-22 16:06:25.734211] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.141 [2024-07-22 16:06:25.734219] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.141 [2024-07-22 16:06:25.734224] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.141 [2024-07-22 16:06:25.734229] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.141 [2024-07-22 16:06:25.734243] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.141 [2024-07-22 16:06:25.734249] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.141 [2024-07-22 16:06:25.734254] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.141 [2024-07-22 16:06:25.734263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.141 [2024-07-22 16:06:25.734284] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.141 [2024-07-22 16:06:25.734343] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.141 [2024-07-22 16:06:25.734363] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:29:23.141 [2024-07-22 16:06:25.734373] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.141 [2024-07-22 16:06:25.734382] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.141 [2024-07-22 16:06:25.734408] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.141 [2024-07-22 16:06:25.734426] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.141 [2024-07-22 16:06:25.734435] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.141 [2024-07-22 16:06:25.734450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.141 [2024-07-22 16:06:25.738508] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.141 [2024-07-22 16:06:25.738552] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.141 [2024-07-22 16:06:25.738564] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.141 [2024-07-22 16:06:25.738570] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.141 [2024-07-22 16:06:25.738576] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.141 [2024-07-22 16:06:25.738597] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.141 [2024-07-22 16:06:25.738605] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.141 [2024-07-22 16:06:25.738610] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9f6270) 00:29:23.141 [2024-07-22 16:06:25.738622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.141 [2024-07-22 16:06:25.738659] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa35af0, cid 3, qid 0 00:29:23.141 [2024-07-22 16:06:25.738711] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.141 [2024-07-22 16:06:25.738720] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.141 [2024-07-22 16:06:25.738725] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.141 [2024-07-22 16:06:25.738731] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa35af0) on tqpair=0x9f6270 00:29:23.141 [2024-07-22 16:06:25.738742] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:29:23.141 0 Kelvin (-273 Celsius) 00:29:23.141 Available Spare: 0% 00:29:23.141 Available Spare Threshold: 0% 00:29:23.141 Life Percentage Used: 0% 00:29:23.141 Data Units Read: 0 00:29:23.141 Data Units Written: 0 00:29:23.141 Host Read Commands: 0 00:29:23.141 Host Write Commands: 0 00:29:23.141 Controller Busy Time: 0 minutes 00:29:23.141 Power Cycles: 0 00:29:23.141 Power On Hours: 0 hours 00:29:23.141 Unsafe Shutdowns: 0 00:29:23.141 Unrecoverable Media Errors: 0 00:29:23.141 Lifetime Error Log Entries: 0 00:29:23.141 Warning Temperature Time: 0 minutes 00:29:23.141 Critical Temperature Time: 0 minutes 00:29:23.141 00:29:23.141 Number of Queues 00:29:23.141 ================ 00:29:23.141 Number of I/O Submission Queues: 127 00:29:23.141 Number of I/O Completion Queues: 127 00:29:23.141 00:29:23.141 Active Namespaces 00:29:23.141 ================= 00:29:23.141 Namespace ID:1 
00:29:23.141 Error Recovery Timeout: Unlimited 00:29:23.141 Command Set Identifier: NVM (00h) 00:29:23.141 Deallocate: Supported 00:29:23.141 Deallocated/Unwritten Error: Not Supported 00:29:23.141 Deallocated Read Value: Unknown 00:29:23.141 Deallocate in Write Zeroes: Not Supported 00:29:23.141 Deallocated Guard Field: 0xFFFF 00:29:23.141 Flush: Supported 00:29:23.141 Reservation: Supported 00:29:23.141 Namespace Sharing Capabilities: Multiple Controllers 00:29:23.141 Size (in LBAs): 131072 (0GiB) 00:29:23.141 Capacity (in LBAs): 131072 (0GiB) 00:29:23.141 Utilization (in LBAs): 131072 (0GiB) 00:29:23.141 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:23.141 EUI64: ABCDEF0123456789 00:29:23.141 UUID: 8b05f0bd-514a-4543-8560-e4dabf708016 00:29:23.141 Thin Provisioning: Not Supported 00:29:23.141 Per-NS Atomic Units: Yes 00:29:23.141 Atomic Boundary Size (Normal): 0 00:29:23.141 Atomic Boundary Size (PFail): 0 00:29:23.141 Atomic Boundary Offset: 0 00:29:23.141 Maximum Single Source Range Length: 65535 00:29:23.141 Maximum Copy Length: 65535 00:29:23.141 Maximum Source Range Count: 1 00:29:23.141 NGUID/EUI64 Never Reused: No 00:29:23.141 Namespace Write Protected: No 00:29:23.141 Number of LBA Formats: 1 00:29:23.141 Current LBA Format: LBA Format #00 00:29:23.141 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:23.141 00:29:23.141 16:06:25 -- host/identify.sh@51 -- # sync 00:29:23.141 16:06:25 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:23.141 16:06:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:23.141 16:06:25 -- common/autotest_common.sh@10 -- # set +x 00:29:23.141 16:06:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:23.141 16:06:25 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:23.141 16:06:25 -- host/identify.sh@56 -- # nvmftestfini 00:29:23.141 16:06:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:23.141 16:06:25 -- nvmf/common.sh@116 -- # sync 00:29:23.141 16:06:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:23.141 16:06:25 -- nvmf/common.sh@119 -- # set +e 00:29:23.141 16:06:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:23.141 16:06:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:23.141 rmmod nvme_tcp 00:29:23.141 rmmod nvme_fabrics 00:29:23.141 rmmod nvme_keyring 00:29:23.141 16:06:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:23.141 16:06:25 -- nvmf/common.sh@123 -- # set -e 00:29:23.141 16:06:25 -- nvmf/common.sh@124 -- # return 0 00:29:23.141 16:06:25 -- nvmf/common.sh@477 -- # '[' -n 68171 ']' 00:29:23.141 16:06:25 -- nvmf/common.sh@478 -- # killprocess 68171 00:29:23.141 16:06:25 -- common/autotest_common.sh@926 -- # '[' -z 68171 ']' 00:29:23.141 16:06:25 -- common/autotest_common.sh@930 -- # kill -0 68171 00:29:23.141 16:06:25 -- common/autotest_common.sh@931 -- # uname 00:29:23.141 16:06:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:23.141 16:06:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68171 00:29:23.141 16:06:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:23.141 16:06:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:23.141 killing process with pid 68171 00:29:23.142 16:06:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68171' 00:29:23.142 16:06:25 -- common/autotest_common.sh@945 -- # kill 68171 00:29:23.142 [2024-07-22 16:06:25.895505] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 
'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:23.142 16:06:25 -- common/autotest_common.sh@950 -- # wait 68171 00:29:23.400 16:06:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:23.400 16:06:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:23.400 16:06:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:23.400 16:06:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:23.400 16:06:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:23.400 16:06:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.400 16:06:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:23.400 16:06:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.400 16:06:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:29:23.400 00:29:23.400 real 0m2.569s 00:29:23.400 user 0m7.552s 00:29:23.400 sys 0m0.549s 00:29:23.400 16:06:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:23.400 ************************************ 00:29:23.400 END TEST nvmf_identify 00:29:23.400 16:06:26 -- common/autotest_common.sh@10 -- # set +x 00:29:23.400 ************************************ 00:29:23.400 16:06:26 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:23.400 16:06:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:23.400 16:06:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:23.400 16:06:26 -- common/autotest_common.sh@10 -- # set +x 00:29:23.400 ************************************ 00:29:23.400 START TEST nvmf_perf 00:29:23.400 ************************************ 00:29:23.400 16:06:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:23.400 * Looking for test storage... 
00:29:23.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:23.400 16:06:26 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:23.400 16:06:26 -- nvmf/common.sh@7 -- # uname -s 00:29:23.400 16:06:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.400 16:06:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.400 16:06:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.400 16:06:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.400 16:06:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.400 16:06:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.400 16:06:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.400 16:06:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.400 16:06:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.400 16:06:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.658 16:06:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:29:23.658 16:06:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:29:23.658 16:06:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.658 16:06:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.658 16:06:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:23.658 16:06:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:23.658 16:06:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.658 16:06:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.658 16:06:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.658 16:06:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.658 16:06:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.658 16:06:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.658 16:06:26 -- paths/export.sh@5 -- 
# export PATH 00:29:23.658 16:06:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.658 16:06:26 -- nvmf/common.sh@46 -- # : 0 00:29:23.658 16:06:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:23.658 16:06:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:23.658 16:06:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:23.658 16:06:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.658 16:06:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.658 16:06:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:23.658 16:06:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:23.658 16:06:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:23.658 16:06:26 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:23.658 16:06:26 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:23.658 16:06:26 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:23.658 16:06:26 -- host/perf.sh@17 -- # nvmftestinit 00:29:23.658 16:06:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:23.658 16:06:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.658 16:06:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:23.658 16:06:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:23.658 16:06:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:23.658 16:06:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.658 16:06:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:23.658 16:06:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.658 16:06:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:29:23.658 16:06:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:29:23.658 16:06:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:29:23.658 16:06:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:29:23.658 16:06:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:29:23.658 16:06:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:29:23.658 16:06:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.658 16:06:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.658 16:06:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:23.658 16:06:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:29:23.658 16:06:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:23.658 16:06:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:23.658 16:06:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:23.658 16:06:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.658 16:06:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:23.658 16:06:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:23.658 16:06:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:23.658 16:06:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:23.658 16:06:26 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:29:23.658 16:06:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:29:23.658 Cannot find device "nvmf_tgt_br" 00:29:23.658 16:06:26 -- nvmf/common.sh@154 -- # true 00:29:23.658 16:06:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:29:23.658 Cannot find device "nvmf_tgt_br2" 00:29:23.658 16:06:26 -- nvmf/common.sh@155 -- # true 00:29:23.658 16:06:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:29:23.658 16:06:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:29:23.658 Cannot find device "nvmf_tgt_br" 00:29:23.658 16:06:26 -- nvmf/common.sh@157 -- # true 00:29:23.658 16:06:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:29:23.658 Cannot find device "nvmf_tgt_br2" 00:29:23.658 16:06:26 -- nvmf/common.sh@158 -- # true 00:29:23.658 16:06:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:29:23.658 16:06:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:29:23.659 16:06:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:23.659 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:23.659 16:06:26 -- nvmf/common.sh@161 -- # true 00:29:23.659 16:06:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:23.659 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:23.659 16:06:26 -- nvmf/common.sh@162 -- # true 00:29:23.659 16:06:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:29:23.659 16:06:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:23.659 16:06:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:23.659 16:06:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:23.659 16:06:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:23.659 16:06:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:23.659 16:06:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:23.659 16:06:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:23.659 16:06:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:23.659 16:06:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:29:23.659 16:06:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:29:23.659 16:06:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:29:23.659 16:06:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:29:23.659 16:06:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:23.659 16:06:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:23.659 16:06:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:23.659 16:06:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:29:23.659 16:06:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:29:23.917 16:06:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:29:23.917 16:06:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:23.917 16:06:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:23.917 16:06:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:23.917 16:06:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:23.917 16:06:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:29:23.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:29:23.917 00:29:23.917 --- 10.0.0.2 ping statistics --- 00:29:23.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.917 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:29:23.917 16:06:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:29:23.917 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:23.917 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:29:23.917 00:29:23.917 --- 10.0.0.3 ping statistics --- 00:29:23.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.917 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:29:23.917 16:06:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:23.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:23.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:29:23.917 00:29:23.917 --- 10.0.0.1 ping statistics --- 00:29:23.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.917 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:29:23.917 16:06:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.917 16:06:26 -- nvmf/common.sh@421 -- # return 0 00:29:23.917 16:06:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:23.917 16:06:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.917 16:06:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:23.917 16:06:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:23.917 16:06:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.917 16:06:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:23.917 16:06:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:23.917 16:06:26 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:23.917 16:06:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:23.917 16:06:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:23.917 16:06:26 -- common/autotest_common.sh@10 -- # set +x 00:29:23.917 16:06:26 -- nvmf/common.sh@469 -- # nvmfpid=68383 00:29:23.917 16:06:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:23.917 16:06:26 -- nvmf/common.sh@470 -- # waitforlisten 68383 00:29:23.917 16:06:26 -- common/autotest_common.sh@819 -- # '[' -z 68383 ']' 00:29:23.917 16:06:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.917 16:06:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:23.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.917 16:06:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.917 16:06:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:23.917 16:06:26 -- common/autotest_common.sh@10 -- # set +x 00:29:23.917 [2024-07-22 16:06:26.662432] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
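The preceding lines (nvmf_veth_init from nvmf/common.sh) build the virtual network the rest of this run targets: a nvmf_tgt_ns_spdk namespace holding the target side of a veth pair at 10.0.0.2 (plus a second interface at 10.0.0.3), the initiator end nvmf_init_if at 10.0.0.1 in the root namespace, both joined through the nvmf_br bridge, and an iptables rule admitting NVMe/TCP on port 4420. A stripped-down sketch of the same topology, assuming root privileges and keeping only one target interface for brevity (an illustration, not the actual helper script):

ip netns add nvmf_tgt_ns_spdk                                   # target app runs inside this namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                 # bridge the two host-side veth ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                              # initiator -> target reachability check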
00:29:23.917 [2024-07-22 16:06:26.662548] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.175 [2024-07-22 16:06:26.801173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:24.175 [2024-07-22 16:06:26.872552] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:24.175 [2024-07-22 16:06:26.872960] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.175 [2024-07-22 16:06:26.872988] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.175 [2024-07-22 16:06:26.872999] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:24.175 [2024-07-22 16:06:26.873129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.175 [2024-07-22 16:06:26.873218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.175 [2024-07-22 16:06:26.873370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:24.175 [2024-07-22 16:06:26.873382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.108 16:06:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:25.108 16:06:27 -- common/autotest_common.sh@852 -- # return 0 00:29:25.108 16:06:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:25.108 16:06:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:25.108 16:06:27 -- common/autotest_common.sh@10 -- # set +x 00:29:25.108 16:06:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.108 16:06:27 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:25.108 16:06:27 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:29:25.366 16:06:28 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:29:25.366 16:06:28 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:25.624 16:06:28 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:29:25.624 16:06:28 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:26.190 16:06:28 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:26.190 16:06:28 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:29:26.190 16:06:28 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:26.190 16:06:28 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:26.190 16:06:28 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:26.447 [2024-07-22 16:06:29.149755] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.447 16:06:29 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:26.705 16:06:29 -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:26.705 16:06:29 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:26.963 16:06:29 -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:26.963 16:06:29 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:27.221 
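With the target app listening on /var/tmp/spdk.sock, perf.sh provisions it over JSON-RPC: the bdev_malloc_create call above produces the 64 MiB, 512-byte-block Malloc0 bdev, and the following lines create the TCP transport, the nqn.2016-06.io.spdk:cnode1 subsystem, and a listener on 10.0.0.2:4420, exporting both Malloc0 and the local NVMe drive (Nvme0n1, attached earlier from 0000:00:06.0 via gen_nvme.sh). Condensed into a stand-alone sketch of that sequence (paths, flags, and addresses are taken from this run; the Nvme0n1 bdev is assumed to exist already):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc bdev_malloc_create 64 512                                     # 64 MiB RAM bdev, reported back as Malloc0
$rpc nvmf_create_transport -t tcp -o                               # TCP transport (flags as used in this run)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # namespace 1: RAM bdev
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1      # namespace 2: local NVMe at 0000:00:06.0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420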
16:06:29 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:27.479 [2024-07-22 16:06:30.263341] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:27.479 16:06:30 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:27.736 16:06:30 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:29:27.736 16:06:30 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:29:27.736 16:06:30 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:27.736 16:06:30 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:29:29.111 Initializing NVMe Controllers 00:29:29.111 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:29.111 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:29:29.111 Initialization complete. Launching workers. 00:29:29.111 ======================================================== 00:29:29.111 Latency(us) 00:29:29.111 Device Information : IOPS MiB/s Average min max 00:29:29.111 PCIE (0000:00:06.0) NSID 1 from core 0: 26288.27 102.69 1217.01 323.88 6050.50 00:29:29.111 ======================================================== 00:29:29.111 Total : 26288.27 102.69 1217.01 323.88 6050.50 00:29:29.111 00:29:29.111 16:06:31 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:30.483 Initializing NVMe Controllers 00:29:30.483 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:30.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:30.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:30.483 Initialization complete. Launching workers. 00:29:30.483 ======================================================== 00:29:30.483 Latency(us) 00:29:30.483 Device Information : IOPS MiB/s Average min max 00:29:30.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3051.80 11.92 327.30 114.07 5259.89 00:29:30.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.50 0.49 8094.97 6014.00 12074.63 00:29:30.483 ======================================================== 00:29:30.483 Total : 3176.30 12.41 631.77 114.07 12074.63 00:29:30.483 00:29:30.483 16:06:33 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:31.856 Initializing NVMe Controllers 00:29:31.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:31.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:31.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:31.857 Initialization complete. Launching workers. 
00:29:31.857 ======================================================== 00:29:31.857 Latency(us) 00:29:31.857 Device Information : IOPS MiB/s Average min max 00:29:31.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8575.11 33.50 3733.83 510.02 7750.32 00:29:31.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4002.91 15.64 8032.50 6497.27 12670.09 00:29:31.857 ======================================================== 00:29:31.857 Total : 12578.03 49.13 5101.87 510.02 12670.09 00:29:31.857 00:29:31.857 16:06:34 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:29:31.857 16:06:34 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:34.386 Initializing NVMe Controllers 00:29:34.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:34.386 Controller IO queue size 128, less than required. 00:29:34.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:34.386 Controller IO queue size 128, less than required. 00:29:34.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:34.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:34.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:34.386 Initialization complete. Launching workers. 00:29:34.386 ======================================================== 00:29:34.386 Latency(us) 00:29:34.386 Device Information : IOPS MiB/s Average min max 00:29:34.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1502.03 375.51 87350.50 40113.16 151227.92 00:29:34.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 672.29 168.07 199658.89 97856.82 330781.75 00:29:34.386 ======================================================== 00:29:34.386 Total : 2174.32 543.58 122075.74 40113.16 330781.75 00:29:34.386 00:29:34.386 16:06:37 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:34.386 No valid NVMe controllers or AIO or URING devices found 00:29:34.386 Initializing NVMe Controllers 00:29:34.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:34.386 Controller IO queue size 128, less than required. 00:29:34.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:34.386 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:34.386 Controller IO queue size 128, less than required. 00:29:34.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:34.386 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:29:34.386 WARNING: Some requested NVMe devices were skipped 00:29:34.387 16:06:37 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:36.937 Initializing NVMe Controllers 00:29:36.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:36.937 Controller IO queue size 128, less than required. 00:29:36.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:36.937 Controller IO queue size 128, less than required. 00:29:36.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:36.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:36.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:36.937 Initialization complete. Launching workers. 00:29:36.937 00:29:36.937 ==================== 00:29:36.937 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:36.937 TCP transport: 00:29:36.937 polls: 7753 00:29:36.937 idle_polls: 0 00:29:36.937 sock_completions: 7753 00:29:36.937 nvme_completions: 6628 00:29:36.937 submitted_requests: 10098 00:29:36.937 queued_requests: 1 00:29:36.937 00:29:36.937 ==================== 00:29:36.937 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:36.937 TCP transport: 00:29:36.937 polls: 8427 00:29:36.937 idle_polls: 0 00:29:36.937 sock_completions: 8427 00:29:36.937 nvme_completions: 6401 00:29:36.937 submitted_requests: 9750 00:29:36.937 queued_requests: 1 00:29:36.937 ======================================================== 00:29:36.937 Latency(us) 00:29:36.937 Device Information : IOPS MiB/s Average min max 00:29:36.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1720.43 430.11 75154.58 36151.27 128043.21 00:29:36.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1663.93 415.98 77723.46 29248.36 128786.75 00:29:36.937 ======================================================== 00:29:36.937 Total : 3384.36 846.09 76417.58 29248.36 128786.75 00:29:36.937 00:29:36.937 16:06:39 -- host/perf.sh@66 -- # sync 00:29:36.937 16:06:39 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:37.195 16:06:40 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:37.195 16:06:40 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:29:37.195 16:06:40 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:37.454 16:06:40 -- host/perf.sh@72 -- # ls_guid=594f394c-4d2b-4da9-a435-954d7e0333f7 00:29:37.454 16:06:40 -- host/perf.sh@73 -- # get_lvs_free_mb 594f394c-4d2b-4da9-a435-954d7e0333f7 00:29:37.454 16:06:40 -- common/autotest_common.sh@1343 -- # local lvs_uuid=594f394c-4d2b-4da9-a435-954d7e0333f7 00:29:37.454 16:06:40 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:37.454 16:06:40 -- common/autotest_common.sh@1345 -- # local fc 00:29:37.454 16:06:40 -- common/autotest_common.sh@1346 -- # local cs 00:29:37.454 16:06:40 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:38.087 16:06:40 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:38.087 { 
00:29:38.087 "uuid": "594f394c-4d2b-4da9-a435-954d7e0333f7", 00:29:38.087 "name": "lvs_0", 00:29:38.087 "base_bdev": "Nvme0n1", 00:29:38.087 "total_data_clusters": 1278, 00:29:38.087 "free_clusters": 1278, 00:29:38.087 "block_size": 4096, 00:29:38.087 "cluster_size": 4194304 00:29:38.087 } 00:29:38.087 ]' 00:29:38.087 16:06:40 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="594f394c-4d2b-4da9-a435-954d7e0333f7") .free_clusters' 00:29:38.087 16:06:40 -- common/autotest_common.sh@1348 -- # fc=1278 00:29:38.087 16:06:40 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="594f394c-4d2b-4da9-a435-954d7e0333f7") .cluster_size' 00:29:38.087 5112 00:29:38.087 16:06:40 -- common/autotest_common.sh@1349 -- # cs=4194304 00:29:38.087 16:06:40 -- common/autotest_common.sh@1352 -- # free_mb=5112 00:29:38.087 16:06:40 -- common/autotest_common.sh@1353 -- # echo 5112 00:29:38.087 16:06:40 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:29:38.087 16:06:40 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 594f394c-4d2b-4da9-a435-954d7e0333f7 lbd_0 5112 00:29:38.345 16:06:41 -- host/perf.sh@80 -- # lb_guid=fb476ad1-192e-4380-b031-e970768858cb 00:29:38.345 16:06:41 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore fb476ad1-192e-4380-b031-e970768858cb lvs_n_0 00:29:38.603 16:06:41 -- host/perf.sh@83 -- # ls_nested_guid=2189dc55-6895-4ace-aae5-b126d5ce92dc 00:29:38.603 16:06:41 -- host/perf.sh@84 -- # get_lvs_free_mb 2189dc55-6895-4ace-aae5-b126d5ce92dc 00:29:38.603 16:06:41 -- common/autotest_common.sh@1343 -- # local lvs_uuid=2189dc55-6895-4ace-aae5-b126d5ce92dc 00:29:38.603 16:06:41 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:38.603 16:06:41 -- common/autotest_common.sh@1345 -- # local fc 00:29:38.603 16:06:41 -- common/autotest_common.sh@1346 -- # local cs 00:29:38.603 16:06:41 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:38.861 16:06:41 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:38.861 { 00:29:38.861 "uuid": "594f394c-4d2b-4da9-a435-954d7e0333f7", 00:29:38.861 "name": "lvs_0", 00:29:38.861 "base_bdev": "Nvme0n1", 00:29:38.861 "total_data_clusters": 1278, 00:29:38.861 "free_clusters": 0, 00:29:38.861 "block_size": 4096, 00:29:38.861 "cluster_size": 4194304 00:29:38.861 }, 00:29:38.861 { 00:29:38.861 "uuid": "2189dc55-6895-4ace-aae5-b126d5ce92dc", 00:29:38.861 "name": "lvs_n_0", 00:29:38.861 "base_bdev": "fb476ad1-192e-4380-b031-e970768858cb", 00:29:38.861 "total_data_clusters": 1276, 00:29:38.861 "free_clusters": 1276, 00:29:38.861 "block_size": 4096, 00:29:38.861 "cluster_size": 4194304 00:29:38.861 } 00:29:38.861 ]' 00:29:38.861 16:06:41 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="2189dc55-6895-4ace-aae5-b126d5ce92dc") .free_clusters' 00:29:39.118 16:06:41 -- common/autotest_common.sh@1348 -- # fc=1276 00:29:39.118 16:06:41 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="2189dc55-6895-4ace-aae5-b126d5ce92dc") .cluster_size' 00:29:39.118 5104 00:29:39.118 16:06:41 -- common/autotest_common.sh@1349 -- # cs=4194304 00:29:39.118 16:06:41 -- common/autotest_common.sh@1352 -- # free_mb=5104 00:29:39.118 16:06:41 -- common/autotest_common.sh@1353 -- # echo 5104 00:29:39.118 16:06:41 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:29:39.118 16:06:41 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2189dc55-6895-4ace-aae5-b126d5ce92dc 
lbd_nest_0 5104 00:29:39.376 16:06:42 -- host/perf.sh@88 -- # lb_nested_guid=26838d23-b40d-42c3-84c3-914ca67d9000 00:29:39.376 16:06:42 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:39.941 16:06:42 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:39.941 16:06:42 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 26838d23-b40d-42c3-84c3-914ca67d9000 00:29:40.200 16:06:42 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:40.458 16:06:43 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:40.458 16:06:43 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:40.458 16:06:43 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:40.458 16:06:43 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:40.458 16:06:43 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:40.716 No valid NVMe controllers or AIO or URING devices found 00:29:40.716 Initializing NVMe Controllers 00:29:40.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:40.716 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:29:40.716 WARNING: Some requested NVMe devices were skipped 00:29:40.716 16:06:43 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:40.716 16:06:43 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:52.915 Initializing NVMe Controllers 00:29:52.915 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:52.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:52.915 Initialization complete. Launching workers. 
00:29:52.915 ======================================================== 00:29:52.915 Latency(us) 00:29:52.915 Device Information : IOPS MiB/s Average min max 00:29:52.915 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1027.59 128.45 971.89 313.47 7591.14 00:29:52.915 ======================================================== 00:29:52.915 Total : 1027.59 128.45 971.89 313.47 7591.14 00:29:52.915 00:29:52.915 16:06:53 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:52.915 16:06:53 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:52.915 16:06:53 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:52.915 No valid NVMe controllers or AIO or URING devices found 00:29:52.915 Initializing NVMe Controllers 00:29:52.915 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:52.915 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:29:52.915 WARNING: Some requested NVMe devices were skipped 00:29:52.915 16:06:54 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:52.915 16:06:54 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:02.889 Initializing NVMe Controllers 00:30:02.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:02.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:02.889 Initialization complete. Launching workers. 00:30:02.889 ======================================================== 00:30:02.889 Latency(us) 00:30:02.889 Device Information : IOPS MiB/s Average min max 00:30:02.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1368.06 171.01 23419.91 5390.55 67799.68 00:30:02.889 ======================================================== 00:30:02.889 Total : 1368.06 171.01 23419.91 5390.55 67799.68 00:30:02.889 00:30:02.889 16:07:04 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:02.889 16:07:04 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:02.889 16:07:04 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:02.889 No valid NVMe controllers or AIO or URING devices found 00:30:02.889 Initializing NVMe Controllers 00:30:02.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:02.889 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:30:02.889 WARNING: Some requested NVMe devices were skipped 00:30:02.889 16:07:04 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:02.889 16:07:04 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:12.861 Initializing NVMe Controllers 00:30:12.861 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:12.861 Controller IO queue size 128, less than required. 00:30:12.861 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:12.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:12.861 Initialization complete. Launching workers. 00:30:12.861 ======================================================== 00:30:12.861 Latency(us) 00:30:12.861 Device Information : IOPS MiB/s Average min max 00:30:12.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4006.95 500.87 32009.43 7703.48 69353.72 00:30:12.861 ======================================================== 00:30:12.861 Total : 4006.95 500.87 32009.43 7703.48 69353.72 00:30:12.861 00:30:12.861 16:07:14 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:12.861 16:07:15 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 26838d23-b40d-42c3-84c3-914ca67d9000 00:30:12.861 16:07:15 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:13.119 16:07:15 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fb476ad1-192e-4380-b031-e970768858cb 00:30:13.377 16:07:16 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:13.635 16:07:16 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:13.636 16:07:16 -- host/perf.sh@114 -- # nvmftestfini 00:30:13.636 16:07:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:13.636 16:07:16 -- nvmf/common.sh@116 -- # sync 00:30:13.636 16:07:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:13.636 16:07:16 -- nvmf/common.sh@119 -- # set +e 00:30:13.636 16:07:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:13.636 16:07:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:13.893 rmmod nvme_tcp 00:30:13.893 rmmod nvme_fabrics 00:30:13.893 rmmod nvme_keyring 00:30:13.893 16:07:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:13.893 16:07:16 -- nvmf/common.sh@123 -- # set -e 00:30:13.893 16:07:16 -- nvmf/common.sh@124 -- # return 0 00:30:13.893 16:07:16 -- nvmf/common.sh@477 -- # '[' -n 68383 ']' 00:30:13.893 16:07:16 -- nvmf/common.sh@478 -- # killprocess 68383 00:30:13.893 16:07:16 -- common/autotest_common.sh@926 -- # '[' -z 68383 ']' 00:30:13.893 16:07:16 -- common/autotest_common.sh@930 -- # kill -0 68383 00:30:13.893 16:07:16 -- common/autotest_common.sh@931 -- # uname 00:30:13.893 16:07:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:13.893 16:07:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68383 00:30:13.893 killing process with pid 68383 00:30:13.893 16:07:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:13.893 16:07:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:13.893 16:07:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68383' 00:30:13.893 16:07:16 -- common/autotest_common.sh@945 -- # kill 68383 00:30:13.893 16:07:16 -- common/autotest_common.sh@950 -- # wait 68383 00:30:15.270 16:07:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:15.270 16:07:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:15.270 16:07:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:15.270 16:07:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:15.270 16:07:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:15.270 16:07:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.270 16:07:17 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:30:15.270 16:07:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.270 16:07:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:30:15.270 ************************************ 00:30:15.270 END TEST nvmf_perf 00:30:15.270 ************************************ 00:30:15.270 00:30:15.270 real 0m51.842s 00:30:15.270 user 3m14.972s 00:30:15.270 sys 0m13.722s 00:30:15.270 16:07:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:15.270 16:07:18 -- common/autotest_common.sh@10 -- # set +x 00:30:15.270 16:07:18 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:15.270 16:07:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:15.270 16:07:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:15.270 16:07:18 -- common/autotest_common.sh@10 -- # set +x 00:30:15.270 ************************************ 00:30:15.270 START TEST nvmf_fio_host 00:30:15.270 ************************************ 00:30:15.270 16:07:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:15.528 * Looking for test storage... 00:30:15.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:15.528 16:07:18 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:15.528 16:07:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.528 16:07:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.528 16:07:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.528 16:07:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.528 16:07:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.528 16:07:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.528 16:07:18 -- paths/export.sh@5 -- # export PATH 00:30:15.528 16:07:18 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.528 16:07:18 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:15.528 16:07:18 -- nvmf/common.sh@7 -- # uname -s 00:30:15.528 16:07:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:15.528 16:07:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:15.528 16:07:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:15.528 16:07:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:15.528 16:07:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:15.528 16:07:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:15.528 16:07:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:15.528 16:07:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:15.528 16:07:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:15.528 16:07:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:15.528 16:07:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:30:15.528 16:07:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:30:15.528 16:07:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:15.528 16:07:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:15.528 16:07:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:15.528 16:07:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:15.528 16:07:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.528 16:07:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.528 16:07:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.528 16:07:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.529 16:07:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.529 16:07:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.529 16:07:18 -- paths/export.sh@5 -- # export PATH 00:30:15.529 16:07:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.529 16:07:18 -- nvmf/common.sh@46 -- # : 0 00:30:15.529 16:07:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:15.529 16:07:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:15.529 16:07:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:15.529 16:07:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:15.529 16:07:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:15.529 16:07:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:15.529 16:07:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:15.529 16:07:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:15.529 16:07:18 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:15.529 16:07:18 -- host/fio.sh@14 -- # nvmftestinit 00:30:15.529 16:07:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:15.529 16:07:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.529 16:07:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:15.529 16:07:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:15.529 16:07:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:15.529 16:07:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.529 16:07:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:15.529 16:07:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.529 16:07:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:30:15.529 16:07:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:30:15.529 16:07:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:30:15.529 16:07:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:30:15.529 16:07:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:30:15.529 16:07:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:30:15.529 16:07:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:15.529 16:07:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:15.529 16:07:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:15.529 16:07:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:30:15.529 16:07:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:15.529 16:07:18 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:15.529 16:07:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:15.529 16:07:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:15.529 16:07:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:15.529 16:07:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:15.529 16:07:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:15.529 16:07:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:15.529 16:07:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:30:15.529 16:07:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:30:15.529 Cannot find device "nvmf_tgt_br" 00:30:15.529 16:07:18 -- nvmf/common.sh@154 -- # true 00:30:15.529 16:07:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:30:15.529 Cannot find device "nvmf_tgt_br2" 00:30:15.529 16:07:18 -- nvmf/common.sh@155 -- # true 00:30:15.529 16:07:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:30:15.529 16:07:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:30:15.529 Cannot find device "nvmf_tgt_br" 00:30:15.529 16:07:18 -- nvmf/common.sh@157 -- # true 00:30:15.529 16:07:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:30:15.529 Cannot find device "nvmf_tgt_br2" 00:30:15.529 16:07:18 -- nvmf/common.sh@158 -- # true 00:30:15.529 16:07:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:30:15.529 16:07:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:30:15.529 16:07:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:15.529 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:15.529 16:07:18 -- nvmf/common.sh@161 -- # true 00:30:15.529 16:07:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:15.529 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:15.529 16:07:18 -- nvmf/common.sh@162 -- # true 00:30:15.529 16:07:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:30:15.529 16:07:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:15.529 16:07:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:15.529 16:07:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:15.529 16:07:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:15.529 16:07:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:15.529 16:07:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:15.529 16:07:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:15.529 16:07:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:15.788 16:07:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:30:15.788 16:07:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:30:15.788 16:07:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:30:15.788 16:07:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:30:15.788 16:07:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:15.788 16:07:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
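(The nvmf_veth_init entries above and below build the test network: a nvmf_tgt_ns_spdk namespace joined to the host through veth pairs and an nvmf_br bridge, with 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace; the bridge wiring, the iptables rule for port 4420 and the ping checks continue below. A condensed sketch of that bring-up, using only the interface names and addresses shown in this log — not a verbatim copy of nvmf/common.sh — is:

  # Sketch of the veth/netns topology assembled by nvmf_veth_init (names/addresses from the log).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target reachability, as checked below
)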
00:30:15.788 16:07:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:15.788 16:07:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:30:15.788 16:07:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:30:15.788 16:07:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:30:15.788 16:07:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:15.788 16:07:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:15.788 16:07:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:15.788 16:07:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:15.788 16:07:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:30:15.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:15.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:30:15.788 00:30:15.788 --- 10.0.0.2 ping statistics --- 00:30:15.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.788 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:30:15.788 16:07:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:30:15.788 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:15.788 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:30:15.788 00:30:15.788 --- 10.0.0.3 ping statistics --- 00:30:15.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.788 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:30:15.788 16:07:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:15.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:15.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:30:15.788 00:30:15.788 --- 10.0.0.1 ping statistics --- 00:30:15.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.788 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:30:15.788 16:07:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:15.788 16:07:18 -- nvmf/common.sh@421 -- # return 0 00:30:15.788 16:07:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:15.788 16:07:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:15.788 16:07:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:15.788 16:07:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:15.788 16:07:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:15.788 16:07:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:15.788 16:07:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:15.788 16:07:18 -- host/fio.sh@16 -- # [[ y != y ]] 00:30:15.788 16:07:18 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:15.788 16:07:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:15.788 16:07:18 -- common/autotest_common.sh@10 -- # set +x 00:30:15.788 16:07:18 -- host/fio.sh@24 -- # nvmfpid=69215 00:30:15.788 16:07:18 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:15.788 16:07:18 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:15.788 16:07:18 -- host/fio.sh@28 -- # waitforlisten 69215 00:30:15.788 16:07:18 -- common/autotest_common.sh@819 -- # '[' -z 69215 ']' 00:30:15.788 16:07:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.788 16:07:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:15.788 16:07:18 -- 
common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:15.788 16:07:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:15.788 16:07:18 -- common/autotest_common.sh@10 -- # set +x 00:30:15.788 [2024-07-22 16:07:18.602587] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:15.788 [2024-07-22 16:07:18.602687] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.048 [2024-07-22 16:07:18.740867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:16.048 [2024-07-22 16:07:18.809612] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:16.048 [2024-07-22 16:07:18.809959] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:16.048 [2024-07-22 16:07:18.810099] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.048 [2024-07-22 16:07:18.810255] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:16.048 [2024-07-22 16:07:18.810582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.048 [2024-07-22 16:07:18.810702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:16.048 [2024-07-22 16:07:18.811121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:16.048 [2024-07-22 16:07:18.811136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.984 16:07:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:16.984 16:07:19 -- common/autotest_common.sh@852 -- # return 0 00:30:16.984 16:07:19 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:17.242 [2024-07-22 16:07:19.862861] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.242 16:07:19 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:17.242 16:07:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:17.242 16:07:19 -- common/autotest_common.sh@10 -- # set +x 00:30:17.242 16:07:19 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:17.500 Malloc1 00:30:17.500 16:07:20 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:17.758 16:07:20 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:18.016 16:07:20 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:18.273 [2024-07-22 16:07:21.020142] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.273 16:07:21 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:18.533 16:07:21 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:30:18.533 16:07:21 -- host/fio.sh@41 -- # fio_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:18.533 16:07:21 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:18.533 16:07:21 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:18.533 16:07:21 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:18.533 16:07:21 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:18.533 16:07:21 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:18.533 16:07:21 -- common/autotest_common.sh@1320 -- # shift 00:30:18.533 16:07:21 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:18.533 16:07:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:18.533 16:07:21 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:18.533 16:07:21 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:18.533 16:07:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:18.533 16:07:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:18.533 16:07:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:18.533 16:07:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:18.533 16:07:21 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:18.533 16:07:21 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:18.533 16:07:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:18.533 16:07:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:18.533 16:07:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:18.533 16:07:21 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:30:18.533 16:07:21 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:18.792 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:18.792 fio-3.35 00:30:18.792 Starting 1 thread 00:30:21.325 00:30:21.325 test: (groupid=0, jobs=1): err= 0: pid=69298: Mon Jul 22 16:07:23 2024 00:30:21.325 read: IOPS=8989, BW=35.1MiB/s (36.8MB/s)(70.4MiB/2006msec) 00:30:21.325 slat (usec): min=2, max=191, avg= 2.62, stdev= 1.95 00:30:21.325 clat (usec): min=1517, max=12893, avg=7392.59, stdev=654.10 00:30:21.325 lat (usec): min=1555, max=12896, avg=7395.21, stdev=653.92 00:30:21.325 clat percentiles (usec): 00:30:21.325 | 1.00th=[ 6259], 5.00th=[ 6587], 10.00th=[ 6718], 20.00th=[ 6915], 00:30:21.325 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7439], 00:30:21.325 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8586], 00:30:21.325 | 99.00th=[ 9372], 99.50th=[10028], 99.90th=[11994], 99.95th=[12387], 00:30:21.325 | 99.99th=[12911] 00:30:21.325 bw ( KiB/s): min=35248, max=36352, per=99.94%, avg=35934.00, stdev=524.37, samples=4 00:30:21.325 iops : min= 8812, max= 9088, avg=8983.50, stdev=131.09, samples=4 00:30:21.325 write: IOPS=9012, BW=35.2MiB/s (36.9MB/s)(70.6MiB/2006msec); 0 zone resets 00:30:21.325 slat (usec): min=2, 
max=574, avg= 2.77, stdev= 4.45 00:30:21.325 clat (usec): min=1310, max=12382, avg=6763.27, stdev=597.35 00:30:21.325 lat (usec): min=1318, max=12385, avg=6766.04, stdev=597.26 00:30:21.325 clat percentiles (usec): 00:30:21.325 | 1.00th=[ 5735], 5.00th=[ 6063], 10.00th=[ 6194], 20.00th=[ 6325], 00:30:21.325 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6783], 00:30:21.325 | 70.00th=[ 6915], 80.00th=[ 7111], 90.00th=[ 7439], 95.00th=[ 7767], 00:30:21.325 | 99.00th=[ 8455], 99.50th=[ 9634], 99.90th=[10683], 99.95th=[11469], 00:30:21.325 | 99.99th=[12387] 00:30:21.325 bw ( KiB/s): min=34560, max=36616, per=99.96%, avg=36034.00, stdev=990.29, samples=4 00:30:21.325 iops : min= 8640, max= 9154, avg=9008.50, stdev=247.57, samples=4 00:30:21.325 lat (msec) : 2=0.04%, 4=0.12%, 10=99.40%, 20=0.44% 00:30:21.325 cpu : usr=68.58%, sys=22.89%, ctx=9, majf=0, minf=5 00:30:21.325 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:21.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:21.325 issued rwts: total=18032,18079,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.325 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:21.325 00:30:21.325 Run status group 0 (all jobs): 00:30:21.325 READ: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.4MiB (73.9MB), run=2006-2006msec 00:30:21.325 WRITE: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=70.6MiB (74.1MB), run=2006-2006msec 00:30:21.325 16:07:23 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:21.325 16:07:23 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:21.325 16:07:23 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:21.325 16:07:23 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:21.325 16:07:23 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:21.325 16:07:23 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:21.325 16:07:23 -- common/autotest_common.sh@1320 -- # shift 00:30:21.325 16:07:23 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:21.325 16:07:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:21.325 16:07:23 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:21.325 16:07:23 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:21.325 16:07:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:21.325 16:07:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:21.325 16:07:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:21.325 16:07:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:21.325 16:07:23 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:21.325 16:07:23 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:21.325 16:07:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:21.325 16:07:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:21.325 16:07:23 -- 
common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:21.325 16:07:23 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:30:21.325 16:07:23 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:21.325 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:21.325 fio-3.35 00:30:21.325 Starting 1 thread 00:30:23.855 00:30:23.855 test: (groupid=0, jobs=1): err= 0: pid=69341: Mon Jul 22 16:07:26 2024 00:30:23.855 read: IOPS=7975, BW=125MiB/s (131MB/s)(250MiB/2003msec) 00:30:23.855 slat (usec): min=3, max=131, avg= 4.43, stdev= 2.06 00:30:23.855 clat (usec): min=2578, max=18047, avg=8633.89, stdev=2755.12 00:30:23.855 lat (usec): min=2582, max=18052, avg=8638.33, stdev=2755.48 00:30:23.855 clat percentiles (usec): 00:30:23.855 | 1.00th=[ 4146], 5.00th=[ 4948], 10.00th=[ 5407], 20.00th=[ 6259], 00:30:23.855 | 30.00th=[ 6915], 40.00th=[ 7504], 50.00th=[ 8094], 60.00th=[ 8848], 00:30:23.855 | 70.00th=[ 9765], 80.00th=[10945], 90.00th=[12780], 95.00th=[13960], 00:30:23.855 | 99.00th=[15926], 99.50th=[16450], 99.90th=[17433], 99.95th=[17957], 00:30:23.855 | 99.99th=[17957] 00:30:23.855 bw ( KiB/s): min=53216, max=73984, per=50.91%, avg=64968.00, stdev=9082.45, samples=4 00:30:23.855 iops : min= 3326, max= 4624, avg=4060.50, stdev=567.65, samples=4 00:30:23.855 write: IOPS=4397, BW=68.7MiB/s (72.0MB/s)(133MiB/1933msec); 0 zone resets 00:30:23.855 slat (usec): min=37, max=244, avg=41.51, stdev= 6.70 00:30:23.855 clat (usec): min=3113, max=23135, avg=13059.82, stdev=2745.54 00:30:23.856 lat (usec): min=3152, max=23187, avg=13101.33, stdev=2748.52 00:30:23.856 clat percentiles (usec): 00:30:23.856 | 1.00th=[ 8455], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10945], 00:30:23.856 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12387], 60.00th=[13042], 00:30:23.856 | 70.00th=[13829], 80.00th=[15139], 90.00th=[16909], 95.00th=[18482], 00:30:23.856 | 99.00th=[21365], 99.50th=[21890], 99.90th=[22938], 99.95th=[22938], 00:30:23.856 | 99.99th=[23200] 00:30:23.856 bw ( KiB/s): min=55296, max=76768, per=95.80%, avg=67400.00, stdev=9367.36, samples=4 00:30:23.856 iops : min= 3456, max= 4798, avg=4212.50, stdev=585.46, samples=4 00:30:23.856 lat (msec) : 4=0.49%, 10=49.15%, 20=49.56%, 50=0.79% 00:30:23.856 cpu : usr=79.68%, sys=13.73%, ctx=47, majf=0, minf=14 00:30:23.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:30:23.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:23.856 issued rwts: total=15975,8500,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:23.856 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:23.856 00:30:23.856 Run status group 0 (all jobs): 00:30:23.856 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=250MiB (262MB), run=2003-2003msec 00:30:23.856 WRITE: bw=68.7MiB/s (72.0MB/s), 68.7MiB/s-68.7MiB/s (72.0MB/s-72.0MB/s), io=133MiB (139MB), run=1933-1933msec 00:30:23.856 16:07:26 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:23.856 16:07:26 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:23.856 16:07:26 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:23.856 16:07:26 -- host/fio.sh@51 -- # 
get_nvme_bdfs 00:30:23.856 16:07:26 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:23.856 16:07:26 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:23.856 16:07:26 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:23.856 16:07:26 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:23.856 16:07:26 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:23.856 16:07:26 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:30:23.856 16:07:26 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:30:23.856 16:07:26 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:30:24.113 Nvme0n1 00:30:24.113 16:07:26 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:24.371 16:07:27 -- host/fio.sh@53 -- # ls_guid=4470bbde-7d06-4e0e-a7c2-98f85f02facd 00:30:24.371 16:07:27 -- host/fio.sh@54 -- # get_lvs_free_mb 4470bbde-7d06-4e0e-a7c2-98f85f02facd 00:30:24.371 16:07:27 -- common/autotest_common.sh@1343 -- # local lvs_uuid=4470bbde-7d06-4e0e-a7c2-98f85f02facd 00:30:24.371 16:07:27 -- common/autotest_common.sh@1344 -- # local lvs_info 00:30:24.371 16:07:27 -- common/autotest_common.sh@1345 -- # local fc 00:30:24.371 16:07:27 -- common/autotest_common.sh@1346 -- # local cs 00:30:24.371 16:07:27 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:24.650 16:07:27 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:30:24.650 { 00:30:24.650 "uuid": "4470bbde-7d06-4e0e-a7c2-98f85f02facd", 00:30:24.650 "name": "lvs_0", 00:30:24.650 "base_bdev": "Nvme0n1", 00:30:24.650 "total_data_clusters": 4, 00:30:24.650 "free_clusters": 4, 00:30:24.650 "block_size": 4096, 00:30:24.650 "cluster_size": 1073741824 00:30:24.650 } 00:30:24.650 ]' 00:30:24.650 16:07:27 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="4470bbde-7d06-4e0e-a7c2-98f85f02facd") .free_clusters' 00:30:24.650 16:07:27 -- common/autotest_common.sh@1348 -- # fc=4 00:30:24.650 16:07:27 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="4470bbde-7d06-4e0e-a7c2-98f85f02facd") .cluster_size' 00:30:24.908 4096 00:30:24.908 16:07:27 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:30:24.908 16:07:27 -- common/autotest_common.sh@1352 -- # free_mb=4096 00:30:24.908 16:07:27 -- common/autotest_common.sh@1353 -- # echo 4096 00:30:24.908 16:07:27 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:30:24.908 1a6210c2-4292-4e06-a157-39fc5be2a707 00:30:25.166 16:07:27 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:25.166 16:07:28 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:25.424 16:07:28 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:25.989 16:07:28 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:25.989 16:07:28 -- common/autotest_common.sh@1339 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:25.989 16:07:28 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:25.989 16:07:28 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:25.989 16:07:28 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:25.989 16:07:28 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:25.989 16:07:28 -- common/autotest_common.sh@1320 -- # shift 00:30:25.989 16:07:28 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:25.989 16:07:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:25.989 16:07:28 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:25.989 16:07:28 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:25.989 16:07:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:25.989 16:07:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:25.989 16:07:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:25.989 16:07:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:25.989 16:07:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:25.989 16:07:28 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:25.989 16:07:28 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:25.989 16:07:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:25.989 16:07:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:25.989 16:07:28 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:30:25.989 16:07:28 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:25.989 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:25.989 fio-3.35 00:30:25.989 Starting 1 thread 00:30:28.518 00:30:28.518 test: (groupid=0, jobs=1): err= 0: pid=69450: Mon Jul 22 16:07:30 2024 00:30:28.518 read: IOPS=6528, BW=25.5MiB/s (26.7MB/s)(51.2MiB/2008msec) 00:30:28.518 slat (usec): min=2, max=262, avg= 2.71, stdev= 3.06 00:30:28.518 clat (usec): min=2704, max=17261, avg=10213.95, stdev=988.38 00:30:28.518 lat (usec): min=2711, max=17264, avg=10216.65, stdev=988.27 00:30:28.518 clat percentiles (usec): 00:30:28.518 | 1.00th=[ 8291], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 00:30:28.518 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:30:28.518 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11338], 95.00th=[11863], 00:30:28.518 | 99.00th=[13173], 99.50th=[13435], 99.90th=[15008], 99.95th=[15926], 00:30:28.518 | 99.99th=[17171] 00:30:28.518 bw ( KiB/s): min=24192, max=26952, per=99.92%, avg=26094.00, stdev=1292.79, samples=4 00:30:28.518 iops : min= 6048, max= 6738, avg=6523.50, stdev=323.20, samples=4 00:30:28.518 write: IOPS=6538, BW=25.5MiB/s (26.8MB/s)(51.3MiB/2008msec); 0 zone resets 00:30:28.518 slat (usec): min=2, max=187, avg= 2.81, stdev= 2.12 00:30:28.518 clat (usec): min=1814, max=16022, avg=9290.84, stdev=944.39 00:30:28.518 lat (usec): min=1825, max=16024, avg=9293.66, stdev=944.42 00:30:28.518 clat percentiles 
(usec): 00:30:28.518 | 1.00th=[ 7439], 5.00th=[ 8029], 10.00th=[ 8225], 20.00th=[ 8586], 00:30:28.518 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9372], 00:30:28.518 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10421], 95.00th=[10945], 00:30:28.518 | 99.00th=[11994], 99.50th=[12256], 99.90th=[14746], 99.95th=[15795], 00:30:28.518 | 99.99th=[16057] 00:30:28.518 bw ( KiB/s): min=25224, max=26624, per=99.91%, avg=26130.00, stdev=645.52, samples=4 00:30:28.518 iops : min= 6306, max= 6656, avg=6532.50, stdev=161.38, samples=4 00:30:28.518 lat (msec) : 2=0.01%, 4=0.09%, 10=62.86%, 20=37.04% 00:30:28.518 cpu : usr=70.75%, sys=22.57%, ctx=8, majf=0, minf=29 00:30:28.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:28.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:28.519 issued rwts: total=13110,13129,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:28.519 00:30:28.519 Run status group 0 (all jobs): 00:30:28.519 READ: bw=25.5MiB/s (26.7MB/s), 25.5MiB/s-25.5MiB/s (26.7MB/s-26.7MB/s), io=51.2MiB (53.7MB), run=2008-2008msec 00:30:28.519 WRITE: bw=25.5MiB/s (26.8MB/s), 25.5MiB/s-25.5MiB/s (26.8MB/s-26.8MB/s), io=51.3MiB (53.8MB), run=2008-2008msec 00:30:28.519 16:07:31 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:28.519 16:07:31 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:28.777 16:07:31 -- host/fio.sh@64 -- # ls_nested_guid=cea636dc-05fd-40a7-b042-85fbda7834f9 00:30:28.777 16:07:31 -- host/fio.sh@65 -- # get_lvs_free_mb cea636dc-05fd-40a7-b042-85fbda7834f9 00:30:28.777 16:07:31 -- common/autotest_common.sh@1343 -- # local lvs_uuid=cea636dc-05fd-40a7-b042-85fbda7834f9 00:30:28.777 16:07:31 -- common/autotest_common.sh@1344 -- # local lvs_info 00:30:28.777 16:07:31 -- common/autotest_common.sh@1345 -- # local fc 00:30:28.777 16:07:31 -- common/autotest_common.sh@1346 -- # local cs 00:30:28.777 16:07:31 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:29.035 16:07:31 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:30:29.035 { 00:30:29.035 "uuid": "4470bbde-7d06-4e0e-a7c2-98f85f02facd", 00:30:29.035 "name": "lvs_0", 00:30:29.035 "base_bdev": "Nvme0n1", 00:30:29.035 "total_data_clusters": 4, 00:30:29.035 "free_clusters": 0, 00:30:29.035 "block_size": 4096, 00:30:29.035 "cluster_size": 1073741824 00:30:29.035 }, 00:30:29.035 { 00:30:29.035 "uuid": "cea636dc-05fd-40a7-b042-85fbda7834f9", 00:30:29.035 "name": "lvs_n_0", 00:30:29.035 "base_bdev": "1a6210c2-4292-4e06-a157-39fc5be2a707", 00:30:29.035 "total_data_clusters": 1022, 00:30:29.035 "free_clusters": 1022, 00:30:29.035 "block_size": 4096, 00:30:29.035 "cluster_size": 4194304 00:30:29.035 } 00:30:29.035 ]' 00:30:29.035 16:07:31 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="cea636dc-05fd-40a7-b042-85fbda7834f9") .free_clusters' 00:30:29.293 16:07:31 -- common/autotest_common.sh@1348 -- # fc=1022 00:30:29.293 16:07:31 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="cea636dc-05fd-40a7-b042-85fbda7834f9") .cluster_size' 00:30:29.293 16:07:31 -- common/autotest_common.sh@1349 -- # cs=4194304 00:30:29.293 16:07:31 -- common/autotest_common.sh@1352 -- # 
free_mb=4088 00:30:29.293 16:07:31 -- common/autotest_common.sh@1353 -- # echo 4088 00:30:29.293 4088 00:30:29.293 16:07:31 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:30:29.552 0bdbdb84-a22b-4540-9d1d-754ec64d823c 00:30:29.552 16:07:32 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:29.810 16:07:32 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:30.067 16:07:32 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:30.326 16:07:32 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:30.326 16:07:32 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:30.326 16:07:32 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:30.326 16:07:32 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:30.326 16:07:32 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:30.326 16:07:32 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:30.326 16:07:32 -- common/autotest_common.sh@1320 -- # shift 00:30:30.326 16:07:32 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:30.326 16:07:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:30.326 16:07:32 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:30.326 16:07:32 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:30.326 16:07:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:30.326 16:07:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:30.326 16:07:33 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:30.326 16:07:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:30.326 16:07:33 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:30.326 16:07:33 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:30.326 16:07:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:30.326 16:07:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:30.326 16:07:33 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:30.326 16:07:33 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:30:30.326 16:07:33 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:30.326 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:30.326 fio-3.35 00:30:30.326 Starting 1 thread 00:30:32.854 00:30:32.854 test: (groupid=0, jobs=1): err= 0: pid=69528: Mon Jul 22 16:07:35 2024 00:30:32.854 read: IOPS=5580, BW=21.8MiB/s (22.9MB/s)(43.8MiB/2009msec) 00:30:32.854 slat (usec): min=2, max=374, avg= 2.74, stdev= 4.38 00:30:32.854 
clat (usec): min=3267, max=20500, avg=12021.08, stdev=1256.79 00:30:32.854 lat (usec): min=3276, max=20504, avg=12023.81, stdev=1256.65 00:30:32.854 clat percentiles (usec): 00:30:32.854 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:30:32.854 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12256], 00:30:32.854 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13566], 95.00th=[14222], 00:30:32.854 | 99.00th=[15533], 99.50th=[16581], 99.90th=[17695], 99.95th=[18220], 00:30:32.854 | 99.99th=[20317] 00:30:32.854 bw ( KiB/s): min=20808, max=23136, per=99.86%, avg=22292.00, stdev=1027.63, samples=4 00:30:32.854 iops : min= 5202, max= 5784, avg=5573.00, stdev=256.91, samples=4 00:30:32.854 write: IOPS=5550, BW=21.7MiB/s (22.7MB/s)(43.6MiB/2009msec); 0 zone resets 00:30:32.854 slat (usec): min=2, max=270, avg= 2.81, stdev= 2.81 00:30:32.854 clat (usec): min=2418, max=18851, avg=10868.24, stdev=1187.30 00:30:32.854 lat (usec): min=2431, max=18855, avg=10871.05, stdev=1187.36 00:30:32.854 clat percentiles (usec): 00:30:32.854 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:30:32.854 | 30.00th=[10159], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:30:32.854 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12387], 95.00th=[12911], 00:30:32.854 | 99.00th=[13960], 99.50th=[14877], 99.90th=[17695], 99.95th=[17957], 00:30:32.854 | 99.99th=[18744] 00:30:32.854 bw ( KiB/s): min=21768, max=22464, per=99.90%, avg=22178.00, stdev=335.45, samples=4 00:30:32.854 iops : min= 5442, max= 5616, avg=5544.50, stdev=83.86, samples=4 00:30:32.854 lat (msec) : 4=0.06%, 10=12.67%, 20=87.25%, 50=0.02% 00:30:32.854 cpu : usr=74.95%, sys=19.62%, ctx=6, majf=0, minf=29 00:30:32.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:32.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:32.854 issued rwts: total=11212,11150,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.854 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:32.854 00:30:32.854 Run status group 0 (all jobs): 00:30:32.854 READ: bw=21.8MiB/s (22.9MB/s), 21.8MiB/s-21.8MiB/s (22.9MB/s-22.9MB/s), io=43.8MiB (45.9MB), run=2009-2009msec 00:30:32.854 WRITE: bw=21.7MiB/s (22.7MB/s), 21.7MiB/s-21.7MiB/s (22.7MB/s-22.7MB/s), io=43.6MiB (45.7MB), run=2009-2009msec 00:30:32.854 16:07:35 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:32.854 16:07:35 -- host/fio.sh@74 -- # sync 00:30:33.136 16:07:35 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:33.393 16:07:36 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:33.651 16:07:36 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:33.908 16:07:36 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:34.167 16:07:36 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:34.425 16:07:37 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:34.425 16:07:37 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:34.425 16:07:37 -- host/fio.sh@86 -- # nvmftestfini 00:30:34.425 16:07:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:34.425 16:07:37 -- nvmf/common.sh@116 
-- # sync 00:30:34.425 16:07:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:34.425 16:07:37 -- nvmf/common.sh@119 -- # set +e 00:30:34.425 16:07:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:34.425 16:07:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:34.425 rmmod nvme_tcp 00:30:34.425 rmmod nvme_fabrics 00:30:34.425 rmmod nvme_keyring 00:30:34.425 16:07:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:34.425 16:07:37 -- nvmf/common.sh@123 -- # set -e 00:30:34.425 16:07:37 -- nvmf/common.sh@124 -- # return 0 00:30:34.425 16:07:37 -- nvmf/common.sh@477 -- # '[' -n 69215 ']' 00:30:34.425 16:07:37 -- nvmf/common.sh@478 -- # killprocess 69215 00:30:34.425 16:07:37 -- common/autotest_common.sh@926 -- # '[' -z 69215 ']' 00:30:34.425 16:07:37 -- common/autotest_common.sh@930 -- # kill -0 69215 00:30:34.425 16:07:37 -- common/autotest_common.sh@931 -- # uname 00:30:34.425 16:07:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:34.425 16:07:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69215 00:30:34.425 killing process with pid 69215 00:30:34.425 16:07:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:34.425 16:07:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:34.425 16:07:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69215' 00:30:34.425 16:07:37 -- common/autotest_common.sh@945 -- # kill 69215 00:30:34.425 16:07:37 -- common/autotest_common.sh@950 -- # wait 69215 00:30:34.683 16:07:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:34.683 16:07:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:34.683 16:07:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:34.683 16:07:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:34.683 16:07:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:34.683 16:07:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.683 16:07:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:34.683 16:07:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.683 16:07:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:30:34.683 ************************************ 00:30:34.683 END TEST nvmf_fio_host 00:30:34.683 ************************************ 00:30:34.683 00:30:34.683 real 0m19.413s 00:30:34.683 user 1m26.060s 00:30:34.683 sys 0m4.356s 00:30:34.683 16:07:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:34.683 16:07:37 -- common/autotest_common.sh@10 -- # set +x 00:30:34.683 16:07:37 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:34.683 16:07:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:34.683 16:07:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:34.683 16:07:37 -- common/autotest_common.sh@10 -- # set +x 00:30:34.683 ************************************ 00:30:34.683 START TEST nvmf_failover 00:30:34.683 ************************************ 00:30:34.683 16:07:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:34.942 * Looking for test storage... 
00:30:34.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:34.942 16:07:37 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:34.942 16:07:37 -- nvmf/common.sh@7 -- # uname -s 00:30:34.942 16:07:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:34.942 16:07:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:34.942 16:07:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:34.942 16:07:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:34.942 16:07:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:34.942 16:07:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:34.942 16:07:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:34.942 16:07:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:34.942 16:07:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:34.942 16:07:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:34.942 16:07:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:30:34.942 16:07:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:30:34.942 16:07:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:34.942 16:07:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:34.942 16:07:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:34.942 16:07:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:34.942 16:07:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:34.942 16:07:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:34.942 16:07:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:34.942 16:07:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.942 16:07:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.942 16:07:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.942 16:07:37 -- paths/export.sh@5 
-- # export PATH 00:30:34.942 16:07:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.942 16:07:37 -- nvmf/common.sh@46 -- # : 0 00:30:34.942 16:07:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:34.942 16:07:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:34.942 16:07:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:34.942 16:07:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:34.942 16:07:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:34.942 16:07:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:34.942 16:07:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:34.942 16:07:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:34.942 16:07:37 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:34.942 16:07:37 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:34.942 16:07:37 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:34.942 16:07:37 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:34.942 16:07:37 -- host/failover.sh@18 -- # nvmftestinit 00:30:34.942 16:07:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:34.942 16:07:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:34.942 16:07:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:34.942 16:07:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:34.942 16:07:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:34.942 16:07:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.942 16:07:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:34.942 16:07:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.942 16:07:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:30:34.942 16:07:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:30:34.942 16:07:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:30:34.943 16:07:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:30:34.943 16:07:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:30:34.943 16:07:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:30:34.943 16:07:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:34.943 16:07:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:34.943 16:07:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:34.943 16:07:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:30:34.943 16:07:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:34.943 16:07:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:34.943 16:07:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:34.943 16:07:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:34.943 16:07:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:34.943 16:07:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:34.943 16:07:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
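For reference, the topology that nvmf_veth_init builds from these variables is small enough to capture in one place. The following is a condensed sketch assembled from the ip/iptables calls traced below; it assumes root, iproute2 and iptables on the host, and skips the defensive cleanup the helper runs first:

#!/usr/bin/env bash
# Sketch of the veth/bridge layout used by nvmftestinit (NET_TYPE=virt).
# The initiator stays in the root namespace (10.0.0.1); the target gets
# two interfaces (10.0.0.2 and 10.0.0.3) inside the nvmf_tgt_ns_spdk
# namespace; everything is joined through the nvmf_br bridge.
set -e

ip netns add nvmf_tgt_ns_spdk

# One veth pair per endpoint; the *_br ends stay in the root namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addresses: initiator on 10.0.0.1, target listeners on 10.0.0.2/10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the root-namespace ends together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Let NVMe/TCP traffic in and let the bridge forward between its ports.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity-check connectivity in both directions before starting the target.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The "Cannot find device" and "Cannot open network namespace" messages in the trace below come from that skipped cleanup pass: on a fresh VM there is nothing to tear down, so the deletions fail harmlessly before the setup above runs.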
00:30:34.943 16:07:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:34.943 16:07:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:30:34.943 16:07:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:30:34.943 Cannot find device "nvmf_tgt_br" 00:30:34.943 16:07:37 -- nvmf/common.sh@154 -- # true 00:30:34.943 16:07:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:30:34.943 Cannot find device "nvmf_tgt_br2" 00:30:34.943 16:07:37 -- nvmf/common.sh@155 -- # true 00:30:34.943 16:07:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:30:34.943 16:07:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:30:34.943 Cannot find device "nvmf_tgt_br" 00:30:34.943 16:07:37 -- nvmf/common.sh@157 -- # true 00:30:34.943 16:07:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:30:34.943 Cannot find device "nvmf_tgt_br2" 00:30:34.943 16:07:37 -- nvmf/common.sh@158 -- # true 00:30:34.943 16:07:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:30:34.943 16:07:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:30:34.943 16:07:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:34.943 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:34.943 16:07:37 -- nvmf/common.sh@161 -- # true 00:30:34.943 16:07:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:34.943 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:34.943 16:07:37 -- nvmf/common.sh@162 -- # true 00:30:34.943 16:07:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:30:34.943 16:07:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:34.943 16:07:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:34.943 16:07:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:34.943 16:07:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:35.201 16:07:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:35.201 16:07:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:35.201 16:07:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:35.201 16:07:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:35.201 16:07:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:30:35.201 16:07:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:30:35.201 16:07:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:30:35.201 16:07:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:30:35.201 16:07:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:35.201 16:07:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:35.201 16:07:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:35.201 16:07:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:30:35.201 16:07:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:30:35.201 16:07:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:30:35.201 16:07:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:35.201 16:07:37 -- nvmf/common.sh@197 -- # ip 
link set nvmf_tgt_br2 master nvmf_br 00:30:35.201 16:07:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:35.201 16:07:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:35.201 16:07:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:30:35.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:35.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:30:35.201 00:30:35.201 --- 10.0.0.2 ping statistics --- 00:30:35.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.201 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:30:35.201 16:07:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:30:35.201 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:35.201 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:30:35.201 00:30:35.201 --- 10.0.0.3 ping statistics --- 00:30:35.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.201 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:30:35.201 16:07:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:35.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:35.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:30:35.201 00:30:35.201 --- 10.0.0.1 ping statistics --- 00:30:35.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.201 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:30:35.201 16:07:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.201 16:07:37 -- nvmf/common.sh@421 -- # return 0 00:30:35.201 16:07:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:35.201 16:07:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.201 16:07:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:35.201 16:07:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:35.201 16:07:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.201 16:07:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:35.201 16:07:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:35.201 16:07:37 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:35.201 16:07:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:35.201 16:07:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:35.201 16:07:37 -- common/autotest_common.sh@10 -- # set +x 00:30:35.201 16:07:37 -- nvmf/common.sh@469 -- # nvmfpid=69765 00:30:35.201 16:07:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:35.201 16:07:37 -- nvmf/common.sh@470 -- # waitforlisten 69765 00:30:35.201 16:07:37 -- common/autotest_common.sh@819 -- # '[' -z 69765 ']' 00:30:35.201 16:07:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.201 16:07:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:35.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.201 16:07:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.201 16:07:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:35.201 16:07:37 -- common/autotest_common.sh@10 -- # set +x 00:30:35.201 [2024-07-22 16:07:38.050042] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
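nvmf_tgt is now running inside the target namespace (pid 69765) and the script is waiting on its RPC socket. The configuration it applies over that socket reduces to the short sequence below; this is a condensed sketch using the same rpc.py calls that appear in the trace, with the default /var/tmp/spdk.sock socket assumed and a simple socket poll standing in for the waitforlisten helper:

#!/usr/bin/env bash
# Condensed target-side setup for the failover test: one TCP transport,
# one 64 MB malloc bdev (512-byte blocks), one subsystem, and three TCP
# listeners on the same address so they can later be removed/re-added.
set -e
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Launch the target inside the namespace (as nvmfappstart does) and wait
# for its RPC socket; the UNIX socket is reachable across namespaces.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s "$port"
done

Keeping listeners on 4420, 4421 and 4422 available up front is what lets the host side always hold a surviving path while one listener at a time is torn down later in the test.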
00:30:35.201 [2024-07-22 16:07:38.050144] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.459 [2024-07-22 16:07:38.198382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:35.459 [2024-07-22 16:07:38.270931] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:35.459 [2024-07-22 16:07:38.271316] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.459 [2024-07-22 16:07:38.271527] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:35.459 [2024-07-22 16:07:38.271705] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:35.459 [2024-07-22 16:07:38.272070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:35.459 [2024-07-22 16:07:38.272145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:35.459 [2024-07-22 16:07:38.272154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.393 16:07:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:36.393 16:07:39 -- common/autotest_common.sh@852 -- # return 0 00:30:36.393 16:07:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:36.393 16:07:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:36.393 16:07:39 -- common/autotest_common.sh@10 -- # set +x 00:30:36.393 16:07:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:36.393 16:07:39 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:36.651 [2024-07-22 16:07:39.309310] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:36.651 16:07:39 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:36.909 Malloc0 00:30:36.909 16:07:39 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:37.168 16:07:39 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:37.427 16:07:40 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:37.684 [2024-07-22 16:07:40.434379] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:37.684 16:07:40 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:37.942 [2024-07-22 16:07:40.714681] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:37.942 16:07:40 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:38.199 [2024-07-22 16:07:40.982963] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:38.199 16:07:41 -- host/failover.sh@31 -- # bdevperf_pid=69827 00:30:38.200 16:07:41 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z 
-r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:38.200 16:07:41 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:38.200 16:07:41 -- host/failover.sh@34 -- # waitforlisten 69827 /var/tmp/bdevperf.sock 00:30:38.200 16:07:41 -- common/autotest_common.sh@819 -- # '[' -z 69827 ']' 00:30:38.200 16:07:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:38.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:38.200 16:07:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:38.200 16:07:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:38.200 16:07:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:38.200 16:07:41 -- common/autotest_common.sh@10 -- # set +x 00:30:38.766 16:07:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:38.766 16:07:41 -- common/autotest_common.sh@852 -- # return 0 00:30:38.766 16:07:41 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:39.024 NVMe0n1 00:30:39.024 16:07:41 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:39.282 00:30:39.282 16:07:42 -- host/failover.sh@39 -- # run_test_pid=69839 00:30:39.282 16:07:42 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:39.282 16:07:42 -- host/failover.sh@41 -- # sleep 1 00:30:40.217 16:07:43 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:40.474 [2024-07-22 16:07:43.283376] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283470] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283479] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283529] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283538] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with 
the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283547] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283564] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283581] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283590] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283598] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283617] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283634] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283643] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283651] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 [2024-07-22 16:07:43.283660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e284a0 is same with the state(5) to be set 00:30:40.474 16:07:43 -- host/failover.sh@45 -- # sleep 3 00:30:43.754 16:07:46 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:44.012 00:30:44.012 16:07:46 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:44.270 [2024-07-22 16:07:46.914178] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e28b80 is same with the state(5) to be set 00:30:44.270 [2024-07-22 16:07:46.914220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e28b80 is same with the state(5) to be set 00:30:44.270 [2024-07-22 16:07:46.914231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e28b80 is same with the state(5) to be set 00:30:44.270 [2024-07-22 16:07:46.914241] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e28b80 is same with the state(5) to be set 00:30:44.270 [2024-07-22 16:07:46.914250] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x1e28b80 is same with the state(5) to be set 00:30:44.270 [2024-07-22 16:07:46.914258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e28b80 is same with the state(5) to be set 00:30:44.270 [2024-07-22 16:07:46.914267] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e28b80 is same with the state(5) to be set 00:30:44.270 [2024-07-22 16:07:46.914276] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e28b80 is same with the state(5) to be set 00:30:44.270 [2024-07-22 16:07:46.914284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e28b80 is same with the state(5) to be set 00:30:44.270 [2024-07-22 16:07:46.914293] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e28b80 is same with the state(5) to be set 00:30:44.271 [2024-07-22 16:07:46.914301] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e28b80 is same with the state(5) to be set 00:30:44.271 [2024-07-22 16:07:46.914310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e28b80 is same with the state(5) to be set 00:30:44.271 [2024-07-22 16:07:46.914319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e28b80 is same with the state(5) to be set 00:30:44.271 [2024-07-22 16:07:46.914327] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e28b80 is same with the state(5) to be set 00:30:44.271 16:07:46 -- host/failover.sh@50 -- # sleep 3 00:30:47.554 16:07:49 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:47.554 [2024-07-22 16:07:50.232336] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:47.554 16:07:50 -- host/failover.sh@55 -- # sleep 1 00:30:48.489 16:07:51 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:48.748 [2024-07-22 16:07:51.486035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27190 is same with the state(5) to be set 00:30:48.748 [2024-07-22 16:07:51.486251] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27190 is same with the state(5) to be set 00:30:48.748 [2024-07-22 16:07:51.486271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27190 is same with the state(5) to be set 00:30:48.748 [2024-07-22 16:07:51.486280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27190 is same with the state(5) to be set 00:30:48.748 [2024-07-22 16:07:51.486289] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27190 is same with the state(5) to be set 00:30:48.748 [2024-07-22 16:07:51.486298] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27190 is same with the state(5) to be set 00:30:48.748 [2024-07-22 16:07:51.486307] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27190 is same with the state(5) to be set 00:30:48.748 [2024-07-22 16:07:51.486316] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27190 is same with the state(5) to be set 00:30:48.748 [2024-07-22 16:07:51.486325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1e27190 is same with the state(5) to be set 00:30:48.748 [2024-07-22 16:07:51.486334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27190 is same with the state(5) to be set 00:30:48.748 [2024-07-22 16:07:51.486342] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27190 is same with the state(5) to be set 00:30:48.748 [2024-07-22 16:07:51.486351] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27190 is same with the state(5) to be set 00:30:48.748 [2024-07-22 16:07:51.486360] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27190 is same with the state(5) to be set 00:30:48.748 [2024-07-22 16:07:51.486377] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27190 is same with the state(5) to be set 00:30:48.748 [2024-07-22 16:07:51.486386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e27190 is same with the state(5) to be set 00:30:48.748 16:07:51 -- host/failover.sh@59 -- # wait 69839 00:30:55.317 0 00:30:55.317 16:07:57 -- host/failover.sh@61 -- # killprocess 69827 00:30:55.317 16:07:57 -- common/autotest_common.sh@926 -- # '[' -z 69827 ']' 00:30:55.317 16:07:57 -- common/autotest_common.sh@930 -- # kill -0 69827 00:30:55.317 16:07:57 -- common/autotest_common.sh@931 -- # uname 00:30:55.317 16:07:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:55.317 16:07:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69827 00:30:55.317 16:07:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:55.317 16:07:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:55.317 16:07:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69827' 00:30:55.317 killing process with pid 69827 00:30:55.317 16:07:57 -- common/autotest_common.sh@945 -- # kill 69827 00:30:55.317 16:07:57 -- common/autotest_common.sh@950 -- # wait 69827 00:30:55.317 16:07:57 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:30:55.317 [2024-07-22 16:07:41.042393] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:55.317 [2024-07-22 16:07:41.042551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69827 ] 00:30:55.317 [2024-07-22 16:07:41.175580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.317 [2024-07-22 16:07:41.240072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.317 Running I/O for 15 seconds... 
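On the host side, the sequence that produced the try.txt contents dumped below is a listener-juggling loop wrapped around a long-running bdevperf job. A condensed sketch, using the same binaries and RPC calls as the trace (PID handling simplified, trap/cleanup logic omitted):

#!/usr/bin/env bash
# Condensed host-side flow of host/failover.sh: run a 15 s verify workload
# through bdev_nvme with two TCP paths, then remove/add listeners on the
# target so the I/O has to fail over between paths while it runs.
set -e
spdk=/home/vagrant/spdk_repo/spdk
rpc_py=$spdk/scripts/rpc.py
bperf_rpc="$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

# Start bdevperf with no bdevs (-z); it waits to be configured over RPC.
$spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 15 -f &
bdevperf_pid=$!
while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done

# Two paths to the same subsystem: ports 4420 and 4421 on 10.0.0.2.
$bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Kick off the verify run asynchronously, then pull paths out from under it.
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!

sleep 1
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3
$bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

wait "$run_test_pid"    # perform_tests returns once the 15 s verify run ends
kill "$bdevperf_pid"    # the script itself uses its killprocess helper here

Each removed listener tears down the corresponding TCP connection while bdevperf still has I/O queued on it, which is why the try.txt dump that follows is dominated by per-command "ABORTED - SQ DELETION" completions on qid:1: those are the in-flight reads and writes being cancelled on the dead path so bdev_nvme can resubmit them on the surviving one, which is exactly what the verify workload is checking.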
00:30:55.317 [2024-07-22 16:07:43.283737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.317 [2024-07-22 16:07:43.283793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.317 [2024-07-22 16:07:43.283822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:112792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.317 [2024-07-22 16:07:43.283839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.317 [2024-07-22 16:07:43.283855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.317 [2024-07-22 16:07:43.283869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.317 [2024-07-22 16:07:43.283884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:112128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.317 [2024-07-22 16:07:43.283898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.317 [2024-07-22 16:07:43.283915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:112136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.317 [2024-07-22 16:07:43.283928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.317 [2024-07-22 16:07:43.283944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.317 [2024-07-22 16:07:43.283958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.317 [2024-07-22 16:07:43.283973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.317 [2024-07-22 16:07:43.283987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.317 [2024-07-22 16:07:43.284003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.317 [2024-07-22 16:07:43.284016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 
16:07:43.284091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:112208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.318 [2024-07-22 16:07:43.284600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.318 [2024-07-22 16:07:43.284719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284763] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.318 [2024-07-22 16:07:43.284777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.318 [2024-07-22 16:07:43.284806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.318 [2024-07-22 16:07:43.284906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.318 [2024-07-22 16:07:43.284939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.284976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.284992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.285006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.285022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.318 [2024-07-22 16:07:43.285036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.285051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.285065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.285081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 
lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.285095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.285110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.285124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.285139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.285153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.285168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.285182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.285197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.285211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.285226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.285240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.285256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.318 [2024-07-22 16:07:43.285271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.318 [2024-07-22 16:07:43.285287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.285300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.319 [2024-07-22 16:07:43.285335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.319 [2024-07-22 16:07:43.285365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113048 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.285395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.285437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.285466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.285510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.285540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.285569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.285598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.285627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.285656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.285685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:55.319 [2024-07-22 16:07:43.285715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.319 [2024-07-22 16:07:43.285753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.319 [2024-07-22 16:07:43.285784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.319 [2024-07-22 16:07:43.285813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.285842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.285871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.319 [2024-07-22 16:07:43.285900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.319 [2024-07-22 16:07:43.285942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.319 [2024-07-22 16:07:43.285972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.285987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.319 [2024-07-22 16:07:43.286001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.286017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.319 [2024-07-22 16:07:43.286031] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.286046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.286060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.286075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.286088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.286103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.286117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.286139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.286153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.286169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.286183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.286198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.319 [2024-07-22 16:07:43.286212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.286229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.319 [2024-07-22 16:07:43.286243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.286258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.319 [2024-07-22 16:07:43.286272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.286288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.286302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.286317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.319 [2024-07-22 16:07:43.286331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.286347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.286361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.286376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.286390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.286406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.319 [2024-07-22 16:07:43.286421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.286438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.319 [2024-07-22 16:07:43.286452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.286467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.286481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.319 [2024-07-22 16:07:43.286519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.319 [2024-07-22 16:07:43.286541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.286557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.286572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.286587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.286600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.286616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.286630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.286645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.286659] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.286675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.286689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.286705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.286718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.286734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.286748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.286764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.320 [2024-07-22 16:07:43.286777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.286793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.286807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.286822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.320 [2024-07-22 16:07:43.286836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.286851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.286865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.286881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:113304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.286895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.286933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.320 [2024-07-22 16:07:43.286951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.286967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.286981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.287016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.320 [2024-07-22 16:07:43.287046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.320 [2024-07-22 16:07:43.287075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.287104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.320 [2024-07-22 16:07:43.287133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.320 [2024-07-22 16:07:43.287163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.287192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.320 [2024-07-22 16:07:43.287221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.287250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.287280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.287315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.287345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.287374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.287403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.287434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.287464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.320 [2024-07-22 16:07:43.287504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.287536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.287566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.287596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 
[2024-07-22 16:07:43.287612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:112800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.287625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.287660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.287689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.287726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.287756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.320 [2024-07-22 16:07:43.287772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.320 [2024-07-22 16:07:43.287785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:43.287800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb08790 is same with the state(5) to be set 00:30:55.321 [2024-07-22 16:07:43.287819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.321 [2024-07-22 16:07:43.287830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.321 [2024-07-22 16:07:43.287842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112880 len:8 PRP1 0x0 PRP2 0x0 00:30:55.321 [2024-07-22 16:07:43.287855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:43.287911] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb08790 was disconnected and freed. reset controller. 
00:30:55.321 [2024-07-22 16:07:43.287942] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:55.321 [2024-07-22 16:07:43.288008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.321 [2024-07-22 16:07:43.288030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:43.288048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.321 [2024-07-22 16:07:43.288062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:43.288076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.321 [2024-07-22 16:07:43.288090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:43.288105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.321 [2024-07-22 16:07:43.288118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:43.288131] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.321 [2024-07-22 16:07:43.290665] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.321 [2024-07-22 16:07:43.290710] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa93160 (9): Bad file descriptor 00:30:55.321 [2024-07-22 16:07:43.321704] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
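The notices above are the expected teardown path in SPDK's bdev_nvme failover handling: once the active TCP qpair (0xb08790) is disconnected, every queued I/O is completed as ABORTED - SQ DELETION, the trid is failed over from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller is reset against the new path. A minimal sketch of how a two-path attachment like this is typically created with SPDK's rpc.py follows; the controller name, addresses, ports and NQN mirror the log but are illustrative, not the exact commands this job ran:

  # Attach the primary path; -b names the controller so a second path can be added to it.
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Attaching again under the same -b name with the second listener registers the alternate
  # trid that bdev_nvme_failover_trid switches to when the 4420 path goes away.
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1

With both trids registered, tearing down the 4420 listener produces the abort / failover / "Resetting controller successful" sequence recorded above, after which I/O resumes on 4421.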
00:30:55.321 [2024-07-22 16:07:46.913783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.321 [2024-07-22 16:07:46.913851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.913892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.321 [2024-07-22 16:07:46.913909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.913923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.321 [2024-07-22 16:07:46.913937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.913951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.321 [2024-07-22 16:07:46.913964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.913978] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa93160 is same with the state(5) to be set 00:30:55.321 [2024-07-22 16:07:46.914388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.914413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.914438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.914453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.914469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.914498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.914517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.914532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.914547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.914560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.914576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.914590] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.914606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.914619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.914634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:89992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.914649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.914664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.914678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.914693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.914717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.914733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:90016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.914747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.914763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.914776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.914792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.914806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.914821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.914835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.914850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.914873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.914888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.321 [2024-07-22 16:07:46.914915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.914935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:90744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.914950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.914966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.914979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.914995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.321 [2024-07-22 16:07:46.915009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.915024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.321 [2024-07-22 16:07:46.915038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.915053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:90072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.915066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.915082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:90096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.915096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.915119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:90112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.915134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.915149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.915163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.915178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:90136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.915192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.915207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.321 [2024-07-22 16:07:46.915221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.321 [2024-07-22 16:07:46.915237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.915250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.915279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.915310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.322 [2024-07-22 16:07:46.915339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.915369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.322 [2024-07-22 16:07:46.915398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.915427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.915461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.322 [2024-07-22 16:07:46.915509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.915541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 
[2024-07-22 16:07:46.915557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.915571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.915600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:90856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.915630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.322 [2024-07-22 16:07:46.915659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.322 [2024-07-22 16:07:46.915688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.322 [2024-07-22 16:07:46.915717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.322 [2024-07-22 16:07:46.915746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:90896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.915775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:90904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.915804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.915834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.915864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.915900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.915929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.915958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.915973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:90256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.915987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.916002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.916016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.916032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.916046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.916061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:90288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.916075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.916090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:90296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.916104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.916120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.916134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.916149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.322 [2024-07-22 16:07:46.916163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.322 [2024-07-22 16:07:46.916178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.322 [2024-07-22 16:07:46.916192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.323 [2024-07-22 16:07:46.916221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.323 [2024-07-22 16:07:46.916250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.916288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:90976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.916317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.916346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.916375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.323 [2024-07-22 16:07:46.916404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.916434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90352 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.916464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:90392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.916506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.916538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.916566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.916596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.916625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.916660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:91008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.916690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.916719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.916748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:91032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:55.323 [2024-07-22 16:07:46.916777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.916811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.323 [2024-07-22 16:07:46.916840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:91056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.916869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.323 [2024-07-22 16:07:46.916898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.323 [2024-07-22 16:07:46.916927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.916957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.916972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.323 [2024-07-22 16:07:46.916989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.917005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.917019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.917039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.917054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.917069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.323 [2024-07-22 16:07:46.917083] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.917098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:91120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.917112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.917127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.323 [2024-07-22 16:07:46.917141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.917156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.917170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.917185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.917199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.917215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.323 [2024-07-22 16:07:46.917229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.917244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.323 [2024-07-22 16:07:46.917258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.917273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.323 [2024-07-22 16:07:46.917286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.917302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:91176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.917316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.917331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.323 [2024-07-22 16:07:46.917345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.323 [2024-07-22 16:07:46.917360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:91192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.323 [2024-07-22 16:07:46.917374] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.917390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.917409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.917424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.917438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.917454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.917469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.917497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.917514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.917530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.917543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.917559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.917573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.917588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.917601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.917617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.917630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.917647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.324 [2024-07-22 16:07:46.917660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.917676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.324 [2024-07-22 16:07:46.917689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.917705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:91216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.917718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.917733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.917747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.917762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.917776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.917798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.917813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.917828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.917842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.917857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.324 [2024-07-22 16:07:46.917871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.917886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.324 [2024-07-22 16:07:46.917901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.917916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.917929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.917944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.324 [2024-07-22 16:07:46.917961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.917976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.324 [2024-07-22 16:07:46.917990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:55.324 [2024-07-22 16:07:46.918005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:91296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.324 [2024-07-22 16:07:46.918019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.918034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.918048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.918063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.918077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.918092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.918106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.918121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.918135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.918150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:90648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.918164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.918185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.918199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.918214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.918228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.918243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.918257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.918272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.324 [2024-07-22 16:07:46.918286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.918301] 
nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa968c0 is same with the state(5) to be set 00:30:55.324 [2024-07-22 16:07:46.918317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.324 [2024-07-22 16:07:46.918328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.324 [2024-07-22 16:07:46.918339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90720 len:8 PRP1 0x0 PRP2 0x0 00:30:55.324 [2024-07-22 16:07:46.918352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:46.918399] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa968c0 was disconnected and freed. reset controller. 00:30:55.324 [2024-07-22 16:07:46.918416] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:55.324 [2024-07-22 16:07:46.918431] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.324 [2024-07-22 16:07:46.920859] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.324 [2024-07-22 16:07:46.920899] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa93160 (9): Bad file descriptor 00:30:55.324 [2024-07-22 16:07:46.950246] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:55.324 [2024-07-22 16:07:51.484748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.324 [2024-07-22 16:07:51.484886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.324 [2024-07-22 16:07:51.484909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.324 [2024-07-22 16:07:51.484923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.484938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.325 [2024-07-22 16:07:51.484951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.484965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.325 [2024-07-22 16:07:51.484979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.485013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa93160 is same with the state(5) to be set 00:30:55.325 [2024-07-22 16:07:51.486455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:33760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.486498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.486526] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:33776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.486542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.486558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:33800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.486573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.486588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:33808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.486602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.486617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.486630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.486646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.486659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.486674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:33888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.486688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.486703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:33904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.486716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.486732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:34408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.486745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.486760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.486773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.486789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.486803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.486819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:49 nsid:1 lba:34472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.486833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.486861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.486876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.486892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:34488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.486917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.486935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.486959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.486974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:34512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.486988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.487004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.325 [2024-07-22 16:07:51.487018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.487034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:34528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.487047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.487063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:34536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.487077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.487093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:34544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.325 [2024-07-22 16:07:51.487106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.487122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.325 [2024-07-22 16:07:51.487136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.487151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:33920 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.487164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.487180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:33928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.487193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.487208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:33944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.487222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.487238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:33968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.487259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.487275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.487289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.487304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:34000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.487318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.487333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.487347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.487363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.487376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.487392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:34560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.325 [2024-07-22 16:07:51.487406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.487421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:34568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.325 [2024-07-22 16:07:51.487436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.487451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:55.325 [2024-07-22 16:07:51.487465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.487480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:34584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.487510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.487527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.487541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.487556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.325 [2024-07-22 16:07:51.487571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.325 [2024-07-22 16:07:51.487586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:34608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.487600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.487615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.487629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.487652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.326 [2024-07-22 16:07:51.487668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.487683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.326 [2024-07-22 16:07:51.487697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.487712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:34640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.487725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.487741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:34648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.487755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.487771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.487784] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.487800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.326 [2024-07-22 16:07:51.487814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.487829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:34672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.326 [2024-07-22 16:07:51.487843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.487858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:34680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.487871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.487887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.487901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.487917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:34696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.487931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.487946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:34048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.487960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.487975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:34056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.487989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.488005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.488018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.488040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:34072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.488054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.488070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:34080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.488083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.488100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.488114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.488130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.488143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.488159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:34120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.488173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.488188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:34704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.488202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.488217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.326 [2024-07-22 16:07:51.488231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.488246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:34720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.488260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.488275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:34728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.326 [2024-07-22 16:07:51.488289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.488304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:34736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.488318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.488333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:34744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.488347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.488362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:34752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.326 [2024-07-22 16:07:51.488376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.488392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.326 [2024-07-22 16:07:51.488413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.488429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.488443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.488458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.326 [2024-07-22 16:07:51.488472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.326 [2024-07-22 16:07:51.488498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:34144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.326 [2024-07-22 16:07:51.488515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.488531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.327 [2024-07-22 16:07:51.488544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.488560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.327 [2024-07-22 16:07:51.488574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.488589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:34192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.327 [2024-07-22 16:07:51.488602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.488618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:34208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.327 [2024-07-22 16:07:51.488632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.488647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.327 [2024-07-22 16:07:51.488661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.488676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:34256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.327 [2024-07-22 16:07:51.488690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.488705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:34272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.327 [2024-07-22 16:07:51.488718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.488734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.327 [2024-07-22 16:07:51.488747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.488763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.327 [2024-07-22 16:07:51.488777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.488799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.327 [2024-07-22 16:07:51.488813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.488829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.327 [2024-07-22 16:07:51.488842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.488858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.327 [2024-07-22 16:07:51.488872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.488887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:34824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.327 [2024-07-22 16:07:51.488901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.488917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:34832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.327 [2024-07-22 16:07:51.488931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.488946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.327 [2024-07-22 16:07:51.488960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.488976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:34848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.327 [2024-07-22 16:07:51.488990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 
[2024-07-22 16:07:51.489005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.327 [2024-07-22 16:07:51.489019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:34864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.327 [2024-07-22 16:07:51.489048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.327 [2024-07-22 16:07:51.489078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:34880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.327 [2024-07-22 16:07:51.489107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.327 [2024-07-22 16:07:51.489136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.327 [2024-07-22 16:07:51.489175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:34904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.327 [2024-07-22 16:07:51.489205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.327 [2024-07-22 16:07:51.489235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:34920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.327 [2024-07-22 16:07:51.489264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.327 [2024-07-22 16:07:51.489292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.327 [2024-07-22 16:07:51.489321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.327 [2024-07-22 16:07:51.489350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.327 [2024-07-22 16:07:51.489379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.327 [2024-07-22 16:07:51.489408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.327 [2024-07-22 16:07:51.489438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:34296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.327 [2024-07-22 16:07:51.489467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:34320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.327 [2024-07-22 16:07:51.489508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.327 [2024-07-22 16:07:51.489539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.327 [2024-07-22 16:07:51.489576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.327 [2024-07-22 16:07:51.489606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:46 nsid:1 lba:34360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.327 [2024-07-22 16:07:51.489635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.327 [2024-07-22 16:07:51.489651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:34376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.328 [2024-07-22 16:07:51.489665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.489681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:34968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.328 [2024-07-22 16:07:51.489695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.489710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.328 [2024-07-22 16:07:51.489724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.489739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.328 [2024-07-22 16:07:51.489752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.489768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:34992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.328 [2024-07-22 16:07:51.489781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.489797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.328 [2024-07-22 16:07:51.489811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.489826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.328 [2024-07-22 16:07:51.489839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.489855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.328 [2024-07-22 16:07:51.489869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.489884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.328 [2024-07-22 16:07:51.489898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.489913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35032 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.328 [2024-07-22 16:07:51.489927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.489953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.328 [2024-07-22 16:07:51.489967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.489983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.328 [2024-07-22 16:07:51.489996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.490012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.328 [2024-07-22 16:07:51.490028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.490044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.328 [2024-07-22 16:07:51.490057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.490073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.328 [2024-07-22 16:07:51.490086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.490102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:35080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.328 [2024-07-22 16:07:51.490116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.490131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.328 [2024-07-22 16:07:51.490145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.490160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.328 [2024-07-22 16:07:51.490174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.490189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.328 [2024-07-22 16:07:51.490203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.490218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.328 
[2024-07-22 16:07:51.490231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.490247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:34424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.328 [2024-07-22 16:07:51.490260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.490276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:34440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.328 [2024-07-22 16:07:51.490289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.490304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.328 [2024-07-22 16:07:51.490324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.490340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:34464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.328 [2024-07-22 16:07:51.490354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.490370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca62f0 is same with the state(5) to be set 00:30:55.328 [2024-07-22 16:07:51.490386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.328 [2024-07-22 16:07:51.490397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.328 [2024-07-22 16:07:51.490408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34496 len:8 PRP1 0x0 PRP2 0x0 00:30:55.328 [2024-07-22 16:07:51.490421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.328 [2024-07-22 16:07:51.490472] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xca62f0 was disconnected and freed. reset controller. 00:30:55.328 [2024-07-22 16:07:51.490501] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:55.328 [2024-07-22 16:07:51.490518] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.328 [2024-07-22 16:07:51.493074] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.328 [2024-07-22 16:07:51.493116] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa93160 (9): Bad file descriptor 00:30:55.328 [2024-07-22 16:07:51.526444] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:55.328 00:30:55.328 Latency(us) 00:30:55.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:55.328 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:55.328 Verification LBA range: start 0x0 length 0x4000 00:30:55.328 NVMe0n1 : 15.01 12503.52 48.84 304.19 0.00 9974.46 498.97 17515.99 00:30:55.328 =================================================================================================================== 00:30:55.328 Total : 12503.52 48.84 304.19 0.00 9974.46 498.97 17515.99 00:30:55.328 Received shutdown signal, test time was about 15.000000 seconds 00:30:55.328 00:30:55.328 Latency(us) 00:30:55.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:55.328 =================================================================================================================== 00:30:55.328 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:55.328 16:07:57 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:55.328 16:07:57 -- host/failover.sh@65 -- # count=3 00:30:55.328 16:07:57 -- host/failover.sh@67 -- # (( count != 3 )) 00:30:55.328 16:07:57 -- host/failover.sh@73 -- # bdevperf_pid=70015 00:30:55.328 16:07:57 -- host/failover.sh@75 -- # waitforlisten 70015 /var/tmp/bdevperf.sock 00:30:55.328 16:07:57 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:55.328 16:07:57 -- common/autotest_common.sh@819 -- # '[' -z 70015 ']' 00:30:55.328 16:07:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:55.328 16:07:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:55.328 16:07:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:55.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
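For readers following the trace: at this point failover.sh has finished the 15-second run summarized in the table above, counts the 'Resetting controller successful' notices (one per failover, three expected), and then starts a second bdevperf instance idle in RPC-server mode so the remaining steps can drive it over a UNIX socket. A minimal sketch of that phase, assuming it runs from the SPDK repo root; the log file name and the readiness poll are illustrative stand-ins for the script's own captured output and its waitforlisten helper:

#!/usr/bin/env bash
# Sketch of the check-and-relaunch phase traced above (not the verbatim failover.sh).

LOG=bdevperf_run1.log   # illustrative name; the real script greps its own captured output

# One 'Resetting controller successful' notice is expected per failover path, 3 in total.
count=$(grep -c 'Resetting controller successful' "$LOG")
(( count == 3 )) || { echo "unexpected reset count: $count" >&2; exit 1; }

# Start bdevperf idle (-z) with an RPC socket (-r) so controllers can be attached
# and I/O started later via bdevperf.py perform_tests.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!

# Poll the RPC socket until it answers; stands in for the waitforlisten helper in the trace.
until ./scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done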
00:30:55.328 16:07:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:55.328 16:07:57 -- common/autotest_common.sh@10 -- # set +x 00:30:55.587 16:07:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:55.587 16:07:58 -- common/autotest_common.sh@852 -- # return 0 00:30:55.587 16:07:58 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:55.846 [2024-07-22 16:07:58.691127] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:56.104 16:07:58 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:56.104 [2024-07-22 16:07:58.955401] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:56.362 16:07:58 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:56.626 NVMe0n1 00:30:56.626 16:07:59 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:56.885 00:30:56.885 16:07:59 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:57.144 00:30:57.144 16:07:59 -- host/failover.sh@82 -- # grep -q NVMe0 00:30:57.144 16:07:59 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:57.402 16:08:00 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:57.660 16:08:00 -- host/failover.sh@87 -- # sleep 3 00:31:00.961 16:08:03 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:00.961 16:08:03 -- host/failover.sh@88 -- # grep -q NVMe0 00:31:00.961 16:08:03 -- host/failover.sh@90 -- # run_test_pid=70093 00:31:00.961 16:08:03 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:00.961 16:08:03 -- host/failover.sh@92 -- # wait 70093 00:31:02.337 0 00:31:02.337 16:08:04 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:31:02.337 [2024-07-22 16:07:57.442418] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
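The rpc.py sequence traced just above is the heart of the failover setup: the target gains listeners on 10.0.0.2:4421 and :4422, bdevperf attaches the same subsystem over all three ports so the NVMe0 controller has alternate trids to fail over to, and the active 4420 path is then detached so queued I/O is aborted (the SQ DELETION notices in try.txt below) and bdev_nvme fails over to 4421 while a short verify workload runs. A rough sketch of that sequence, assuming an SPDK repo checkout and the addresses, ports, and NQN shown in the trace:

#!/usr/bin/env bash
# Sketch of the multipath setup and failover trigger traced above.
RPC=./scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Expose the subsystem on two additional target ports.
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

# Attach the same controller over all three ports; the second and third attach
# register 4421/4422 as failover trids for NVMe0 inside bdevperf.
for port in 4420 4421 4422; do
    $RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
        -s "$port" -f ipv4 -n "$NQN"
done
$RPC -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0

# Drop the active path, give bdev_nvme a moment to fail over to 4421,
# then run the short verify job that produces the 1-second table below.
$RPC -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
sleep 3
./examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests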
00:31:02.337 [2024-07-22 16:07:57.442671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70015 ] 00:31:02.337 [2024-07-22 16:07:57.590571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.337 [2024-07-22 16:07:57.658813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.337 [2024-07-22 16:08:00.462816] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:02.337 [2024-07-22 16:08:00.462952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.337 [2024-07-22 16:08:00.462978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.337 [2024-07-22 16:08:00.462997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.337 [2024-07-22 16:08:00.463011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.338 [2024-07-22 16:08:00.463026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.338 [2024-07-22 16:08:00.463040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.338 [2024-07-22 16:08:00.463054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:02.338 [2024-07-22 16:08:00.463068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:02.338 [2024-07-22 16:08:00.463082] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:02.338 [2024-07-22 16:08:00.463145] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:02.338 [2024-07-22 16:08:00.463178] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11df160 (9): Bad file descriptor 00:31:02.338 [2024-07-22 16:08:00.466693] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:02.338 Running I/O for 1 seconds... 
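The try.txt excerpt above is the path removal seen from the initiator side: outstanding ASYNC EVENT REQUESTs on the admin queue are aborted ("SQ DELETION"), the controller briefly reports the failed state, bdev_nvme fails over from 10.0.0.2:4420 to 10.0.0.2:4421, and the reset completes ("Resetting controller successful") while the 1-second verify job continues. The same progress can be followed by hand with the checks the script itself uses; a small sketch (rpc.py again abbreviates scripts/rpc.py, file path as printed in the trace):

  # watch the failover happen in the captured bdevperf log
  grep -E 'Start failover|Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  # confirm the NVMe0 controller is still registered after each path is detached
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0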
00:31:02.338 00:31:02.338 Latency(us) 00:31:02.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.338 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:02.338 Verification LBA range: start 0x0 length 0x4000 00:31:02.338 NVMe0n1 : 1.01 12775.82 49.91 0.00 0.00 9960.57 1482.01 11319.85 00:31:02.338 =================================================================================================================== 00:31:02.338 Total : 12775.82 49.91 0.00 0.00 9960.57 1482.01 11319.85 00:31:02.338 16:08:04 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:02.338 16:08:04 -- host/failover.sh@95 -- # grep -q NVMe0 00:31:02.338 16:08:05 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:02.596 16:08:05 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:02.596 16:08:05 -- host/failover.sh@99 -- # grep -q NVMe0 00:31:02.867 16:08:05 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:03.447 16:08:06 -- host/failover.sh@101 -- # sleep 3 00:31:06.732 16:08:09 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:06.732 16:08:09 -- host/failover.sh@103 -- # grep -q NVMe0 00:31:06.732 16:08:09 -- host/failover.sh@108 -- # killprocess 70015 00:31:06.732 16:08:09 -- common/autotest_common.sh@926 -- # '[' -z 70015 ']' 00:31:06.732 16:08:09 -- common/autotest_common.sh@930 -- # kill -0 70015 00:31:06.732 16:08:09 -- common/autotest_common.sh@931 -- # uname 00:31:06.732 16:08:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:06.732 16:08:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70015 00:31:06.732 killing process with pid 70015 00:31:06.732 16:08:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:06.732 16:08:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:06.732 16:08:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70015' 00:31:06.732 16:08:09 -- common/autotest_common.sh@945 -- # kill 70015 00:31:06.732 16:08:09 -- common/autotest_common.sh@950 -- # wait 70015 00:31:06.732 16:08:09 -- host/failover.sh@110 -- # sync 00:31:06.732 16:08:09 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:07.298 16:08:09 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:07.298 16:08:09 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:31:07.298 16:08:09 -- host/failover.sh@116 -- # nvmftestfini 00:31:07.298 16:08:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:07.298 16:08:09 -- nvmf/common.sh@116 -- # sync 00:31:07.298 16:08:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:07.298 16:08:09 -- nvmf/common.sh@119 -- # set +e 00:31:07.298 16:08:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:07.298 16:08:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:07.298 rmmod nvme_tcp 00:31:07.298 rmmod nvme_fabrics 00:31:07.298 rmmod nvme_keyring 00:31:07.298 16:08:09 -- nvmf/common.sh@122 
-- # modprobe -v -r nvme-fabrics 00:31:07.298 16:08:09 -- nvmf/common.sh@123 -- # set -e 00:31:07.298 16:08:09 -- nvmf/common.sh@124 -- # return 0 00:31:07.298 16:08:09 -- nvmf/common.sh@477 -- # '[' -n 69765 ']' 00:31:07.298 16:08:09 -- nvmf/common.sh@478 -- # killprocess 69765 00:31:07.298 16:08:09 -- common/autotest_common.sh@926 -- # '[' -z 69765 ']' 00:31:07.298 16:08:09 -- common/autotest_common.sh@930 -- # kill -0 69765 00:31:07.298 16:08:09 -- common/autotest_common.sh@931 -- # uname 00:31:07.298 16:08:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:07.298 16:08:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69765 00:31:07.298 16:08:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:07.298 16:08:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:07.299 16:08:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69765' 00:31:07.299 killing process with pid 69765 00:31:07.299 16:08:09 -- common/autotest_common.sh@945 -- # kill 69765 00:31:07.299 16:08:09 -- common/autotest_common.sh@950 -- # wait 69765 00:31:07.299 16:08:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:07.299 16:08:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:07.299 16:08:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:07.299 16:08:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:07.299 16:08:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:07.299 16:08:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.299 16:08:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:07.299 16:08:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.558 16:08:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:31:07.558 00:31:07.558 real 0m32.630s 00:31:07.558 user 2m7.013s 00:31:07.558 sys 0m5.368s 00:31:07.558 16:08:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:07.558 16:08:10 -- common/autotest_common.sh@10 -- # set +x 00:31:07.558 ************************************ 00:31:07.558 END TEST nvmf_failover 00:31:07.558 ************************************ 00:31:07.558 16:08:10 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:07.558 16:08:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:07.558 16:08:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:07.558 16:08:10 -- common/autotest_common.sh@10 -- # set +x 00:31:07.558 ************************************ 00:31:07.558 START TEST nvmf_discovery 00:31:07.558 ************************************ 00:31:07.558 16:08:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:07.558 * Looking for test storage... 
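Teardown follows the same pattern for every host test: the bdevperf process (pid 70015 here) is killed, the subsystem is deleted from the target, the scratch try.txt is removed, and nvmftestfini unloads the kernel NVMe/TCP modules, kills the nvmf_tgt process (pid 69765), and flushes the initiator-side address so the next test can rebuild the virtual network from scratch. Roughly, in the order the trace shows it (remove_spdk_ns is not expanded in the trace; it presumably tears down the nvmf_tgt_ns_spdk namespace, and $nvmfpid is illustrative shorthand for the recorded target pid):

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                # 69765 in this run
  ip -4 addr flush nvmf_init_if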
00:31:07.558 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:31:07.558 16:08:10 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:07.558 16:08:10 -- nvmf/common.sh@7 -- # uname -s 00:31:07.558 16:08:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:07.558 16:08:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:07.558 16:08:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:07.558 16:08:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:07.558 16:08:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:07.558 16:08:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:07.558 16:08:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:07.558 16:08:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:07.558 16:08:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:07.558 16:08:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:07.558 16:08:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:31:07.558 16:08:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:31:07.558 16:08:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:07.558 16:08:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:07.558 16:08:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:07.558 16:08:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:07.558 16:08:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:07.558 16:08:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:07.558 16:08:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:07.558 16:08:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.558 16:08:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.558 16:08:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.558 16:08:10 -- paths/export.sh@5 
-- # export PATH 00:31:07.558 16:08:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.558 16:08:10 -- nvmf/common.sh@46 -- # : 0 00:31:07.558 16:08:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:07.558 16:08:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:07.558 16:08:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:07.558 16:08:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:07.558 16:08:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:07.558 16:08:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:07.558 16:08:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:07.558 16:08:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:07.558 16:08:10 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:07.558 16:08:10 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:07.558 16:08:10 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:07.558 16:08:10 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:07.558 16:08:10 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:07.558 16:08:10 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:07.558 16:08:10 -- host/discovery.sh@25 -- # nvmftestinit 00:31:07.558 16:08:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:07.558 16:08:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:07.558 16:08:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:07.558 16:08:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:07.558 16:08:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:07.559 16:08:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.559 16:08:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:07.559 16:08:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.559 16:08:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:31:07.559 16:08:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:31:07.559 16:08:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:31:07.559 16:08:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:31:07.559 16:08:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:31:07.559 16:08:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:31:07.559 16:08:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:07.559 16:08:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:07.559 16:08:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:07.559 16:08:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:31:07.559 16:08:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:07.559 16:08:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:07.559 16:08:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:07.559 16:08:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:07.559 16:08:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:07.559 
16:08:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:07.559 16:08:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:07.559 16:08:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:07.559 16:08:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:31:07.559 16:08:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:31:07.559 Cannot find device "nvmf_tgt_br" 00:31:07.559 16:08:10 -- nvmf/common.sh@154 -- # true 00:31:07.559 16:08:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:31:07.559 Cannot find device "nvmf_tgt_br2" 00:31:07.559 16:08:10 -- nvmf/common.sh@155 -- # true 00:31:07.559 16:08:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:31:07.559 16:08:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:31:07.559 Cannot find device "nvmf_tgt_br" 00:31:07.559 16:08:10 -- nvmf/common.sh@157 -- # true 00:31:07.559 16:08:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:31:07.559 Cannot find device "nvmf_tgt_br2" 00:31:07.559 16:08:10 -- nvmf/common.sh@158 -- # true 00:31:07.559 16:08:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:31:07.823 16:08:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:31:07.823 16:08:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:07.823 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:07.823 16:08:10 -- nvmf/common.sh@161 -- # true 00:31:07.823 16:08:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:07.823 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:07.823 16:08:10 -- nvmf/common.sh@162 -- # true 00:31:07.823 16:08:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:31:07.823 16:08:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:07.823 16:08:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:07.823 16:08:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:07.823 16:08:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:07.823 16:08:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:07.823 16:08:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:07.823 16:08:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:07.823 16:08:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:07.823 16:08:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:31:07.823 16:08:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:31:07.823 16:08:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:31:07.823 16:08:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:31:07.823 16:08:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:07.823 16:08:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:07.823 16:08:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:07.823 16:08:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:31:07.823 16:08:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:31:07.823 16:08:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br 
master nvmf_br 00:31:07.823 16:08:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:07.823 16:08:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:07.823 16:08:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:07.823 16:08:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:07.823 16:08:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:31:07.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:07.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:31:07.823 00:31:07.823 --- 10.0.0.2 ping statistics --- 00:31:07.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.823 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:31:07.823 16:08:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:31:07.823 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:07.823 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:31:07.823 00:31:07.823 --- 10.0.0.3 ping statistics --- 00:31:07.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.823 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:31:07.823 16:08:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:07.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:07.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:31:07.823 00:31:07.823 --- 10.0.0.1 ping statistics --- 00:31:07.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.823 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:31:07.823 16:08:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:07.823 16:08:10 -- nvmf/common.sh@421 -- # return 0 00:31:07.823 16:08:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:07.823 16:08:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:07.823 16:08:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:07.823 16:08:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:07.823 16:08:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:07.823 16:08:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:07.823 16:08:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:07.823 16:08:10 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:07.823 16:08:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:07.823 16:08:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:07.823 16:08:10 -- common/autotest_common.sh@10 -- # set +x 00:31:07.823 16:08:10 -- nvmf/common.sh@469 -- # nvmfpid=70370 00:31:07.823 16:08:10 -- nvmf/common.sh@470 -- # waitforlisten 70370 00:31:07.823 16:08:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:07.823 16:08:10 -- common/autotest_common.sh@819 -- # '[' -z 70370 ']' 00:31:07.823 16:08:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.823 16:08:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:07.823 16:08:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
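With NET_TYPE=virt nothing here touches real NICs: nvmf_veth_init builds the whole fabric out of veth pairs and a single bridge, moves the target-side interfaces into the nvmf_tgt_ns_spdk namespace, and the target is then launched inside that namespace. The sequence above, condensed (the "Cannot find device" / "Cannot open network namespace" lines are just the idempotent cleanup probes failing harmlessly on a fresh host):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
  # bring every end up, inside and outside the namespace
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side ends together and let NVMe/TCP traffic through
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # connectivity check, then start the target inside the namespace
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &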
00:31:07.823 16:08:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:07.823 16:08:10 -- common/autotest_common.sh@10 -- # set +x 00:31:08.112 [2024-07-22 16:08:10.690849] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:08.112 [2024-07-22 16:08:10.690958] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:08.112 [2024-07-22 16:08:10.831793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.112 [2024-07-22 16:08:10.900638] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:08.112 [2024-07-22 16:08:10.900832] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:08.112 [2024-07-22 16:08:10.900848] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:08.112 [2024-07-22 16:08:10.900859] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:08.112 [2024-07-22 16:08:10.900902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:09.048 16:08:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:09.048 16:08:11 -- common/autotest_common.sh@852 -- # return 0 00:31:09.048 16:08:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:09.048 16:08:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:09.048 16:08:11 -- common/autotest_common.sh@10 -- # set +x 00:31:09.048 16:08:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:09.048 16:08:11 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:09.048 16:08:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.048 16:08:11 -- common/autotest_common.sh@10 -- # set +x 00:31:09.048 [2024-07-22 16:08:11.687908] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:09.048 16:08:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.048 16:08:11 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:09.048 16:08:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.048 16:08:11 -- common/autotest_common.sh@10 -- # set +x 00:31:09.048 [2024-07-22 16:08:11.696032] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:09.048 16:08:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.048 16:08:11 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:09.048 16:08:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.048 16:08:11 -- common/autotest_common.sh@10 -- # set +x 00:31:09.048 null0 00:31:09.048 16:08:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.048 16:08:11 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:09.048 16:08:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.048 16:08:11 -- common/autotest_common.sh@10 -- # set +x 00:31:09.048 null1 00:31:09.048 16:08:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.048 16:08:11 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:09.048 16:08:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.048 16:08:11 -- 
common/autotest_common.sh@10 -- # set +x 00:31:09.048 16:08:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.048 16:08:11 -- host/discovery.sh@45 -- # hostpid=70402 00:31:09.048 16:08:11 -- host/discovery.sh@46 -- # waitforlisten 70402 /tmp/host.sock 00:31:09.048 16:08:11 -- common/autotest_common.sh@819 -- # '[' -z 70402 ']' 00:31:09.048 16:08:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:31:09.048 16:08:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:09.048 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:09.048 16:08:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:09.048 16:08:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:09.048 16:08:11 -- common/autotest_common.sh@10 -- # set +x 00:31:09.048 16:08:11 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:09.048 [2024-07-22 16:08:11.801997] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:09.048 [2024-07-22 16:08:11.802141] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70402 ] 00:31:09.306 [2024-07-22 16:08:11.952323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.306 [2024-07-22 16:08:12.020196] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:09.306 [2024-07-22 16:08:12.020395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.871 16:08:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:09.871 16:08:12 -- common/autotest_common.sh@852 -- # return 0 00:31:09.871 16:08:12 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:09.871 16:08:12 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:09.871 16:08:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.871 16:08:12 -- common/autotest_common.sh@10 -- # set +x 00:31:09.871 16:08:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.871 16:08:12 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:09.871 16:08:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.871 16:08:12 -- common/autotest_common.sh@10 -- # set +x 00:31:09.871 16:08:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.871 16:08:12 -- host/discovery.sh@72 -- # notify_id=0 00:31:09.871 16:08:12 -- host/discovery.sh@78 -- # get_subsystem_names 00:31:09.871 16:08:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:09.871 16:08:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.871 16:08:12 -- common/autotest_common.sh@10 -- # set +x 00:31:09.871 16:08:12 -- host/discovery.sh@59 -- # sort 00:31:09.871 16:08:12 -- host/discovery.sh@59 -- # xargs 00:31:09.871 16:08:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:09.871 16:08:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.129 16:08:12 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:31:10.129 16:08:12 -- host/discovery.sh@79 -- # get_bdev_list 00:31:10.129 
16:08:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.129 16:08:12 -- host/discovery.sh@55 -- # sort 00:31:10.129 16:08:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.129 16:08:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:10.129 16:08:12 -- common/autotest_common.sh@10 -- # set +x 00:31:10.129 16:08:12 -- host/discovery.sh@55 -- # xargs 00:31:10.129 16:08:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.129 16:08:12 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:31:10.129 16:08:12 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:10.129 16:08:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.129 16:08:12 -- common/autotest_common.sh@10 -- # set +x 00:31:10.129 16:08:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.129 16:08:12 -- host/discovery.sh@82 -- # get_subsystem_names 00:31:10.129 16:08:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:10.129 16:08:12 -- host/discovery.sh@59 -- # sort 00:31:10.129 16:08:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:10.129 16:08:12 -- host/discovery.sh@59 -- # xargs 00:31:10.129 16:08:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.129 16:08:12 -- common/autotest_common.sh@10 -- # set +x 00:31:10.129 16:08:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.129 16:08:12 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:31:10.129 16:08:12 -- host/discovery.sh@83 -- # get_bdev_list 00:31:10.129 16:08:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.129 16:08:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.129 16:08:12 -- common/autotest_common.sh@10 -- # set +x 00:31:10.130 16:08:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:10.130 16:08:12 -- host/discovery.sh@55 -- # sort 00:31:10.130 16:08:12 -- host/discovery.sh@55 -- # xargs 00:31:10.130 16:08:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.130 16:08:12 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:10.130 16:08:12 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:10.130 16:08:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.130 16:08:12 -- common/autotest_common.sh@10 -- # set +x 00:31:10.130 16:08:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.130 16:08:12 -- host/discovery.sh@86 -- # get_subsystem_names 00:31:10.130 16:08:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:10.130 16:08:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.130 16:08:12 -- common/autotest_common.sh@10 -- # set +x 00:31:10.130 16:08:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:10.130 16:08:12 -- host/discovery.sh@59 -- # sort 00:31:10.130 16:08:12 -- host/discovery.sh@59 -- # xargs 00:31:10.130 16:08:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.387 16:08:13 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:31:10.387 16:08:13 -- host/discovery.sh@87 -- # get_bdev_list 00:31:10.387 16:08:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.387 16:08:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.387 16:08:13 -- common/autotest_common.sh@10 -- # set +x 00:31:10.387 16:08:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:10.387 16:08:13 -- host/discovery.sh@55 -- # xargs 00:31:10.387 16:08:13 -- host/discovery.sh@55 -- # 
sort 00:31:10.387 16:08:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.387 16:08:13 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:10.387 16:08:13 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:10.387 16:08:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.387 16:08:13 -- common/autotest_common.sh@10 -- # set +x 00:31:10.387 [2024-07-22 16:08:13.080392] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:10.387 16:08:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.387 16:08:13 -- host/discovery.sh@92 -- # get_subsystem_names 00:31:10.387 16:08:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:10.387 16:08:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:10.387 16:08:13 -- host/discovery.sh@59 -- # sort 00:31:10.387 16:08:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.387 16:08:13 -- common/autotest_common.sh@10 -- # set +x 00:31:10.387 16:08:13 -- host/discovery.sh@59 -- # xargs 00:31:10.387 16:08:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.387 16:08:13 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:10.387 16:08:13 -- host/discovery.sh@93 -- # get_bdev_list 00:31:10.387 16:08:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.387 16:08:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:10.387 16:08:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.387 16:08:13 -- common/autotest_common.sh@10 -- # set +x 00:31:10.387 16:08:13 -- host/discovery.sh@55 -- # xargs 00:31:10.387 16:08:13 -- host/discovery.sh@55 -- # sort 00:31:10.387 16:08:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.387 16:08:13 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:31:10.387 16:08:13 -- host/discovery.sh@94 -- # get_notification_count 00:31:10.387 16:08:13 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:10.387 16:08:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.387 16:08:13 -- common/autotest_common.sh@10 -- # set +x 00:31:10.387 16:08:13 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:10.387 16:08:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.645 16:08:13 -- host/discovery.sh@74 -- # notification_count=0 00:31:10.645 16:08:13 -- host/discovery.sh@75 -- # notify_id=0 00:31:10.645 16:08:13 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:31:10.645 16:08:13 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:10.645 16:08:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.645 16:08:13 -- common/autotest_common.sh@10 -- # set +x 00:31:10.645 16:08:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.645 16:08:13 -- host/discovery.sh@100 -- # sleep 1 00:31:10.903 [2024-07-22 16:08:13.711037] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:10.903 [2024-07-22 16:08:13.711105] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:10.903 [2024-07-22 16:08:13.711129] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:10.903 [2024-07-22 16:08:13.717093] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:11.161 [2024-07-22 16:08:13.773282] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:11.161 [2024-07-22 16:08:13.773326] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:11.418 16:08:14 -- host/discovery.sh@101 -- # get_subsystem_names 00:31:11.418 16:08:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:11.418 16:08:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:11.418 16:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.418 16:08:14 -- common/autotest_common.sh@10 -- # set +x 00:31:11.418 16:08:14 -- host/discovery.sh@59 -- # sort 00:31:11.418 16:08:14 -- host/discovery.sh@59 -- # xargs 00:31:11.676 16:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.676 16:08:14 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.676 16:08:14 -- host/discovery.sh@102 -- # get_bdev_list 00:31:11.676 16:08:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.676 16:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.676 16:08:14 -- common/autotest_common.sh@10 -- # set +x 00:31:11.676 16:08:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:11.676 16:08:14 -- host/discovery.sh@55 -- # sort 00:31:11.676 16:08:14 -- host/discovery.sh@55 -- # xargs 00:31:11.676 16:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.676 16:08:14 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:11.676 16:08:14 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:31:11.676 16:08:14 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:11.676 16:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.676 16:08:14 -- common/autotest_common.sh@10 -- # set +x 00:31:11.676 16:08:14 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:11.676 16:08:14 -- host/discovery.sh@63 -- # sort -n 00:31:11.676 16:08:14 -- host/discovery.sh@63 -- # xargs 00:31:11.676 16:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.676 16:08:14 -- host/discovery.sh@103 
-- # [[ 4420 == \4\4\2\0 ]] 00:31:11.676 16:08:14 -- host/discovery.sh@104 -- # get_notification_count 00:31:11.676 16:08:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:11.676 16:08:14 -- host/discovery.sh@74 -- # jq '. | length' 00:31:11.676 16:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.676 16:08:14 -- common/autotest_common.sh@10 -- # set +x 00:31:11.676 16:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.676 16:08:14 -- host/discovery.sh@74 -- # notification_count=1 00:31:11.676 16:08:14 -- host/discovery.sh@75 -- # notify_id=1 00:31:11.676 16:08:14 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:31:11.676 16:08:14 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:11.676 16:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.676 16:08:14 -- common/autotest_common.sh@10 -- # set +x 00:31:11.676 16:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.676 16:08:14 -- host/discovery.sh@109 -- # sleep 1 00:31:13.088 16:08:15 -- host/discovery.sh@110 -- # get_bdev_list 00:31:13.088 16:08:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:13.088 16:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.088 16:08:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:13.088 16:08:15 -- common/autotest_common.sh@10 -- # set +x 00:31:13.088 16:08:15 -- host/discovery.sh@55 -- # sort 00:31:13.088 16:08:15 -- host/discovery.sh@55 -- # xargs 00:31:13.088 16:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.088 16:08:15 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:13.088 16:08:15 -- host/discovery.sh@111 -- # get_notification_count 00:31:13.088 16:08:15 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:13.088 16:08:15 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:13.088 16:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.088 16:08:15 -- common/autotest_common.sh@10 -- # set +x 00:31:13.088 16:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.088 16:08:15 -- host/discovery.sh@74 -- # notification_count=1 00:31:13.088 16:08:15 -- host/discovery.sh@75 -- # notify_id=2 00:31:13.088 16:08:15 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:31:13.088 16:08:15 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:13.088 16:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.088 16:08:15 -- common/autotest_common.sh@10 -- # set +x 00:31:13.088 [2024-07-22 16:08:15.631195] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:13.088 [2024-07-22 16:08:15.632458] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:13.088 [2024-07-22 16:08:15.632658] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:13.088 16:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.088 16:08:15 -- host/discovery.sh@117 -- # sleep 1 00:31:13.088 [2024-07-22 16:08:15.638448] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:13.088 [2024-07-22 16:08:15.695765] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:13.088 [2024-07-22 16:08:15.696008] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:13.088 [2024-07-22 16:08:15.696126] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:14.024 16:08:16 -- host/discovery.sh@118 -- # get_subsystem_names 00:31:14.024 16:08:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:14.024 16:08:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:14.024 16:08:16 -- host/discovery.sh@59 -- # sort 00:31:14.024 16:08:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.024 16:08:16 -- host/discovery.sh@59 -- # xargs 00:31:14.024 16:08:16 -- common/autotest_common.sh@10 -- # set +x 00:31:14.024 16:08:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.024 16:08:16 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.024 16:08:16 -- host/discovery.sh@119 -- # get_bdev_list 00:31:14.024 16:08:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:14.024 16:08:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:14.024 16:08:16 -- host/discovery.sh@55 -- # sort 00:31:14.024 16:08:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.024 16:08:16 -- common/autotest_common.sh@10 -- # set +x 00:31:14.024 16:08:16 -- host/discovery.sh@55 -- # xargs 00:31:14.024 16:08:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.024 16:08:16 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:14.024 16:08:16 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:31:14.024 16:08:16 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:14.024 16:08:16 -- 
host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:14.024 16:08:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.024 16:08:16 -- common/autotest_common.sh@10 -- # set +x 00:31:14.024 16:08:16 -- host/discovery.sh@63 -- # xargs 00:31:14.024 16:08:16 -- host/discovery.sh@63 -- # sort -n 00:31:14.024 16:08:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.024 16:08:16 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:14.024 16:08:16 -- host/discovery.sh@121 -- # get_notification_count 00:31:14.024 16:08:16 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:14.024 16:08:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.024 16:08:16 -- host/discovery.sh@74 -- # jq '. | length' 00:31:14.024 16:08:16 -- common/autotest_common.sh@10 -- # set +x 00:31:14.024 16:08:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.024 16:08:16 -- host/discovery.sh@74 -- # notification_count=0 00:31:14.024 16:08:16 -- host/discovery.sh@75 -- # notify_id=2 00:31:14.024 16:08:16 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:31:14.024 16:08:16 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:14.024 16:08:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.024 16:08:16 -- common/autotest_common.sh@10 -- # set +x 00:31:14.024 [2024-07-22 16:08:16.845771] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:14.024 [2024-07-22 16:08:16.845952] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:14.024 16:08:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.024 16:08:16 -- host/discovery.sh@127 -- # sleep 1 00:31:14.024 [2024-07-22 16:08:16.850798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.024 [2024-07-22 16:08:16.850843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.024 [2024-07-22 16:08:16.850858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.024 [2024-07-22 16:08:16.850869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.024 [2024-07-22 16:08:16.850879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.024 [2024-07-22 16:08:16.850888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.024 [2024-07-22 16:08:16.850898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.024 [2024-07-22 16:08:16.850918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.024 [2024-07-22 16:08:16.850929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178fe70 is same with the state(5) to be set 00:31:14.024 [2024-07-22 16:08:16.851777] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 
not found 00:31:14.024 [2024-07-22 16:08:16.851811] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:14.024 [2024-07-22 16:08:16.851876] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178fe70 (9): Bad file descriptor 00:31:15.400 16:08:17 -- host/discovery.sh@128 -- # get_subsystem_names 00:31:15.400 16:08:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:15.400 16:08:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:15.400 16:08:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.400 16:08:17 -- common/autotest_common.sh@10 -- # set +x 00:31:15.400 16:08:17 -- host/discovery.sh@59 -- # sort 00:31:15.400 16:08:17 -- host/discovery.sh@59 -- # xargs 00:31:15.400 16:08:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.400 16:08:17 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.400 16:08:17 -- host/discovery.sh@129 -- # get_bdev_list 00:31:15.400 16:08:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:15.400 16:08:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:15.400 16:08:17 -- host/discovery.sh@55 -- # sort 00:31:15.400 16:08:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.400 16:08:17 -- host/discovery.sh@55 -- # xargs 00:31:15.400 16:08:17 -- common/autotest_common.sh@10 -- # set +x 00:31:15.400 16:08:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.400 16:08:17 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:15.400 16:08:17 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:31:15.400 16:08:17 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:15.400 16:08:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.400 16:08:17 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:15.400 16:08:17 -- common/autotest_common.sh@10 -- # set +x 00:31:15.400 16:08:17 -- host/discovery.sh@63 -- # sort -n 00:31:15.400 16:08:17 -- host/discovery.sh@63 -- # xargs 00:31:15.400 16:08:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.400 16:08:17 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:31:15.400 16:08:17 -- host/discovery.sh@131 -- # get_notification_count 00:31:15.400 16:08:18 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:15.400 16:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.400 16:08:18 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:15.400 16:08:18 -- common/autotest_common.sh@10 -- # set +x 00:31:15.400 16:08:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.400 16:08:18 -- host/discovery.sh@74 -- # notification_count=0 00:31:15.400 16:08:18 -- host/discovery.sh@75 -- # notify_id=2 00:31:15.400 16:08:18 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:31:15.400 16:08:18 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:15.400 16:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.400 16:08:18 -- common/autotest_common.sh@10 -- # set +x 00:31:15.400 16:08:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.400 16:08:18 -- host/discovery.sh@135 -- # sleep 1 00:31:16.335 16:08:19 -- host/discovery.sh@136 -- # get_subsystem_names 00:31:16.335 16:08:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:16.335 16:08:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:16.335 16:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.335 16:08:19 -- common/autotest_common.sh@10 -- # set +x 00:31:16.335 16:08:19 -- host/discovery.sh@59 -- # sort 00:31:16.335 16:08:19 -- host/discovery.sh@59 -- # xargs 00:31:16.335 16:08:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.335 16:08:19 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:31:16.335 16:08:19 -- host/discovery.sh@137 -- # get_bdev_list 00:31:16.335 16:08:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:16.335 16:08:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.335 16:08:19 -- host/discovery.sh@55 -- # sort 00:31:16.335 16:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.335 16:08:19 -- common/autotest_common.sh@10 -- # set +x 00:31:16.335 16:08:19 -- host/discovery.sh@55 -- # xargs 00:31:16.335 16:08:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.335 16:08:19 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:31:16.335 16:08:19 -- host/discovery.sh@138 -- # get_notification_count 00:31:16.335 16:08:19 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:16.335 16:08:19 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:16.335 16:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.335 16:08:19 -- common/autotest_common.sh@10 -- # set +x 00:31:16.335 16:08:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.611 16:08:19 -- host/discovery.sh@74 -- # notification_count=2 00:31:16.611 16:08:19 -- host/discovery.sh@75 -- # notify_id=4 00:31:16.611 16:08:19 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:31:16.611 16:08:19 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:16.611 16:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.611 16:08:19 -- common/autotest_common.sh@10 -- # set +x 00:31:17.547 [2024-07-22 16:08:20.223902] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:17.547 [2024-07-22 16:08:20.224129] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:17.547 [2024-07-22 16:08:20.224197] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:17.547 [2024-07-22 16:08:20.229942] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:17.547 [2024-07-22 16:08:20.289336] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:17.547 [2024-07-22 16:08:20.289396] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:17.547 16:08:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.547 16:08:20 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:17.547 16:08:20 -- common/autotest_common.sh@640 -- # local es=0 00:31:17.547 16:08:20 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:17.547 16:08:20 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:17.547 16:08:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:17.547 16:08:20 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:17.547 16:08:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:17.547 16:08:20 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:17.547 16:08:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.547 16:08:20 -- common/autotest_common.sh@10 -- # set +x 00:31:17.547 request: 00:31:17.547 { 00:31:17.547 "name": "nvme", 00:31:17.547 "trtype": "tcp", 00:31:17.547 "traddr": "10.0.0.2", 00:31:17.547 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:17.547 "adrfam": "ipv4", 00:31:17.547 "trsvcid": "8009", 00:31:17.547 "wait_for_attach": true, 00:31:17.547 "method": "bdev_nvme_start_discovery", 00:31:17.547 "req_id": 1 00:31:17.547 } 00:31:17.547 Got JSON-RPC error response 00:31:17.547 response: 00:31:17.547 { 00:31:17.547 "code": -17, 00:31:17.547 "message": "File exists" 00:31:17.547 } 00:31:17.547 16:08:20 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:17.547 16:08:20 -- common/autotest_common.sh@643 -- # es=1 00:31:17.547 16:08:20 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:17.547 16:08:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:17.547 16:08:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:17.547 16:08:20 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:31:17.547 16:08:20 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:17.547 16:08:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.547 16:08:20 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:17.547 16:08:20 -- common/autotest_common.sh@10 -- # set +x 00:31:17.547 16:08:20 -- host/discovery.sh@67 -- # sort 00:31:17.547 16:08:20 -- host/discovery.sh@67 -- # xargs 00:31:17.547 16:08:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.547 16:08:20 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:31:17.547 16:08:20 -- host/discovery.sh@147 -- # get_bdev_list 00:31:17.547 16:08:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:17.547 16:08:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.547 16:08:20 -- common/autotest_common.sh@10 -- # set +x 00:31:17.547 16:08:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:17.547 16:08:20 -- host/discovery.sh@55 -- # sort 00:31:17.547 16:08:20 -- host/discovery.sh@55 -- # xargs 00:31:17.548 16:08:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.548 16:08:20 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:17.548 16:08:20 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:17.548 16:08:20 -- common/autotest_common.sh@640 -- # local es=0 00:31:17.548 16:08:20 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:17.548 16:08:20 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:17.806 16:08:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:17.806 16:08:20 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:17.806 16:08:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:17.806 16:08:20 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:17.806 16:08:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.806 16:08:20 -- common/autotest_common.sh@10 -- # set +x 00:31:17.806 request: 00:31:17.806 { 00:31:17.806 "name": "nvme_second", 00:31:17.806 "trtype": "tcp", 00:31:17.806 "traddr": "10.0.0.2", 00:31:17.806 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:17.806 "adrfam": "ipv4", 00:31:17.806 "trsvcid": "8009", 00:31:17.806 "wait_for_attach": true, 00:31:17.806 "method": "bdev_nvme_start_discovery", 00:31:17.806 "req_id": 1 00:31:17.806 } 00:31:17.806 Got JSON-RPC error response 00:31:17.806 response: 00:31:17.806 { 00:31:17.806 "code": -17, 00:31:17.806 "message": "File exists" 00:31:17.806 } 00:31:17.806 16:08:20 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:17.806 16:08:20 -- common/autotest_common.sh@643 -- # es=1 00:31:17.806 16:08:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:17.806 16:08:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:17.806 16:08:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:17.806 
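Note: rpc_cmd in these tests is SPDK's thin wrapper around scripts/rpc.py pointed at the socket given with -s; a minimal by-hand replay of the duplicate-start check above, assuming the repo checkout at /home/vagrant/spdk_repo/spdk and the host app still listening on /tmp/host.sock, would look roughly like this (the -17 "File exists" outcome is the one captured in the log, and the same call with -b nvme_second, port 8010 and -T 3000 is exercised just below and expected to time out with -110):
cd /home/vagrant/spdk_repo/spdk
# Repeating bdev_nvme_start_discovery against a discovery service that is already
# attached (10.0.0.2:8009) is expected to be rejected with JSON-RPC error -17:
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
# The get_discovery_ctrlrs / get_bdev_list helpers reduce to these two queries:
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name'
scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'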
16:08:20 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:31:17.806 16:08:20 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:17.806 16:08:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.806 16:08:20 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:17.806 16:08:20 -- common/autotest_common.sh@10 -- # set +x 00:31:17.806 16:08:20 -- host/discovery.sh@67 -- # xargs 00:31:17.806 16:08:20 -- host/discovery.sh@67 -- # sort 00:31:17.806 16:08:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.806 16:08:20 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:31:17.806 16:08:20 -- host/discovery.sh@153 -- # get_bdev_list 00:31:17.806 16:08:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:17.806 16:08:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:17.806 16:08:20 -- host/discovery.sh@55 -- # sort 00:31:17.806 16:08:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.806 16:08:20 -- host/discovery.sh@55 -- # xargs 00:31:17.806 16:08:20 -- common/autotest_common.sh@10 -- # set +x 00:31:17.806 16:08:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.806 16:08:20 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:17.806 16:08:20 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:17.806 16:08:20 -- common/autotest_common.sh@640 -- # local es=0 00:31:17.806 16:08:20 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:17.806 16:08:20 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:17.806 16:08:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:17.806 16:08:20 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:17.806 16:08:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:17.806 16:08:20 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:17.806 16:08:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.806 16:08:20 -- common/autotest_common.sh@10 -- # set +x 00:31:18.742 [2024-07-22 16:08:21.547244] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.742 [2024-07-22 16:08:21.547379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.742 [2024-07-22 16:08:21.547429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.742 [2024-07-22 16:08:21.547448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178beb0 with addr=10.0.0.2, port=8010 00:31:18.742 [2024-07-22 16:08:21.547468] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:18.742 [2024-07-22 16:08:21.547478] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:18.742 [2024-07-22 16:08:21.547508] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:20.117 [2024-07-22 16:08:22.547232] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.117 [2024-07-22 16:08:22.547388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:20.117 [2024-07-22 16:08:22.547444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.117 [2024-07-22 16:08:22.547461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178beb0 with addr=10.0.0.2, port=8010 00:31:20.117 [2024-07-22 16:08:22.547480] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:20.117 [2024-07-22 16:08:22.547490] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:20.117 [2024-07-22 16:08:22.547500] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:21.053 [2024-07-22 16:08:23.547086] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:21.053 request: 00:31:21.053 { 00:31:21.053 "name": "nvme_second", 00:31:21.053 "trtype": "tcp", 00:31:21.053 "traddr": "10.0.0.2", 00:31:21.053 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:21.053 "adrfam": "ipv4", 00:31:21.053 "trsvcid": "8010", 00:31:21.053 "attach_timeout_ms": 3000, 00:31:21.053 "method": "bdev_nvme_start_discovery", 00:31:21.053 "req_id": 1 00:31:21.053 } 00:31:21.053 Got JSON-RPC error response 00:31:21.053 response: 00:31:21.053 { 00:31:21.053 "code": -110, 00:31:21.054 "message": "Connection timed out" 00:31:21.054 } 00:31:21.054 16:08:23 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:21.054 16:08:23 -- common/autotest_common.sh@643 -- # es=1 00:31:21.054 16:08:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:21.054 16:08:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:21.054 16:08:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:21.054 16:08:23 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:31:21.054 16:08:23 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:21.054 16:08:23 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:21.054 16:08:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.054 16:08:23 -- common/autotest_common.sh@10 -- # set +x 00:31:21.054 16:08:23 -- host/discovery.sh@67 -- # sort 00:31:21.054 16:08:23 -- host/discovery.sh@67 -- # xargs 00:31:21.054 16:08:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.054 16:08:23 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:31:21.054 16:08:23 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:31:21.054 16:08:23 -- host/discovery.sh@162 -- # kill 70402 00:31:21.054 16:08:23 -- host/discovery.sh@163 -- # nvmftestfini 00:31:21.054 16:08:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:21.054 16:08:23 -- nvmf/common.sh@116 -- # sync 00:31:21.054 16:08:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:21.054 16:08:23 -- nvmf/common.sh@119 -- # set +e 00:31:21.054 16:08:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:21.054 16:08:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:21.054 rmmod nvme_tcp 00:31:21.054 rmmod nvme_fabrics 00:31:21.054 rmmod nvme_keyring 00:31:21.054 16:08:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:21.054 16:08:23 -- nvmf/common.sh@123 -- # set -e 00:31:21.054 16:08:23 -- nvmf/common.sh@124 -- # return 0 00:31:21.054 16:08:23 -- nvmf/common.sh@477 -- # '[' -n 70370 ']' 00:31:21.054 16:08:23 -- nvmf/common.sh@478 -- # killprocess 70370 00:31:21.054 16:08:23 -- common/autotest_common.sh@926 -- # '[' -z 70370 ']' 00:31:21.054 16:08:23 -- common/autotest_common.sh@930 -- # kill -0 70370 00:31:21.054 16:08:23 -- 
common/autotest_common.sh@931 -- # uname 00:31:21.054 16:08:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:21.054 16:08:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70370 00:31:21.054 16:08:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:21.054 killing process with pid 70370 00:31:21.054 16:08:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:21.054 16:08:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70370' 00:31:21.054 16:08:23 -- common/autotest_common.sh@945 -- # kill 70370 00:31:21.054 16:08:23 -- common/autotest_common.sh@950 -- # wait 70370 00:31:21.054 16:08:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:21.054 16:08:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:21.054 16:08:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:21.054 16:08:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:21.054 16:08:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:21.054 16:08:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.054 16:08:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:21.054 16:08:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.330 16:08:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:31:21.330 00:31:21.330 real 0m13.712s 00:31:21.330 user 0m26.380s 00:31:21.330 sys 0m2.164s 00:31:21.330 16:08:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:21.330 ************************************ 00:31:21.330 END TEST nvmf_discovery 00:31:21.330 ************************************ 00:31:21.330 16:08:23 -- common/autotest_common.sh@10 -- # set +x 00:31:21.330 16:08:23 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:21.330 16:08:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:21.330 16:08:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:21.330 16:08:23 -- common/autotest_common.sh@10 -- # set +x 00:31:21.330 ************************************ 00:31:21.330 START TEST nvmf_discovery_remove_ifc 00:31:21.330 ************************************ 00:31:21.330 16:08:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:21.330 * Looking for test storage... 
00:31:21.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:31:21.330 16:08:24 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:21.330 16:08:24 -- nvmf/common.sh@7 -- # uname -s 00:31:21.330 16:08:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:21.330 16:08:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:21.330 16:08:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:21.330 16:08:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:21.330 16:08:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:21.330 16:08:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:21.330 16:08:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:21.330 16:08:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:21.330 16:08:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:21.330 16:08:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:21.330 16:08:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:31:21.330 16:08:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:31:21.330 16:08:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:21.330 16:08:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:21.330 16:08:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:21.330 16:08:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:21.330 16:08:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:21.330 16:08:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:21.330 16:08:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:21.330 16:08:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.330 16:08:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.330 16:08:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.331 16:08:24 -- 
paths/export.sh@5 -- # export PATH 00:31:21.331 16:08:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.331 16:08:24 -- nvmf/common.sh@46 -- # : 0 00:31:21.331 16:08:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:21.331 16:08:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:21.331 16:08:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:21.331 16:08:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:21.331 16:08:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:21.331 16:08:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:21.331 16:08:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:21.331 16:08:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:21.331 16:08:24 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:21.331 16:08:24 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:21.331 16:08:24 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:21.331 16:08:24 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:21.331 16:08:24 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:21.331 16:08:24 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:21.331 16:08:24 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:21.331 16:08:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:21.331 16:08:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:21.331 16:08:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:21.331 16:08:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:21.331 16:08:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:21.331 16:08:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.331 16:08:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:21.331 16:08:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.331 16:08:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:31:21.331 16:08:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:31:21.331 16:08:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:31:21.331 16:08:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:31:21.331 16:08:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:31:21.331 16:08:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:31:21.331 16:08:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:21.331 16:08:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:21.331 16:08:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:21.331 16:08:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:31:21.331 16:08:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:21.331 16:08:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:21.331 16:08:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:21.331 16:08:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:31:21.331 16:08:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:21.331 16:08:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:21.331 16:08:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:21.331 16:08:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:21.331 16:08:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:31:21.331 16:08:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:31:21.331 Cannot find device "nvmf_tgt_br" 00:31:21.331 16:08:24 -- nvmf/common.sh@154 -- # true 00:31:21.331 16:08:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:31:21.331 Cannot find device "nvmf_tgt_br2" 00:31:21.331 16:08:24 -- nvmf/common.sh@155 -- # true 00:31:21.331 16:08:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:31:21.331 16:08:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:31:21.331 Cannot find device "nvmf_tgt_br" 00:31:21.331 16:08:24 -- nvmf/common.sh@157 -- # true 00:31:21.331 16:08:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:31:21.331 Cannot find device "nvmf_tgt_br2" 00:31:21.331 16:08:24 -- nvmf/common.sh@158 -- # true 00:31:21.331 16:08:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:31:21.331 16:08:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:31:21.589 16:08:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:21.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:21.589 16:08:24 -- nvmf/common.sh@161 -- # true 00:31:21.589 16:08:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:21.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:21.589 16:08:24 -- nvmf/common.sh@162 -- # true 00:31:21.589 16:08:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:31:21.589 16:08:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:21.589 16:08:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:21.589 16:08:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:21.589 16:08:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:21.589 16:08:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:21.589 16:08:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:21.589 16:08:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:21.589 16:08:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:21.589 16:08:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:31:21.589 16:08:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:31:21.589 16:08:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:31:21.589 16:08:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:31:21.589 16:08:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:21.589 16:08:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:21.589 16:08:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:21.589 16:08:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:31:21.589 16:08:24 -- nvmf/common.sh@192 -- # ip 
link set nvmf_br up 00:31:21.589 16:08:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:31:21.589 16:08:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:21.589 16:08:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:21.589 16:08:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:21.589 16:08:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:21.589 16:08:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:31:21.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:21.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:31:21.589 00:31:21.589 --- 10.0.0.2 ping statistics --- 00:31:21.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.589 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:31:21.589 16:08:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:31:21.589 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:21.589 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:31:21.589 00:31:21.589 --- 10.0.0.3 ping statistics --- 00:31:21.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.589 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:31:21.589 16:08:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:21.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:21.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:31:21.589 00:31:21.589 --- 10.0.0.1 ping statistics --- 00:31:21.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.589 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:31:21.589 16:08:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:21.589 16:08:24 -- nvmf/common.sh@421 -- # return 0 00:31:21.589 16:08:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:21.589 16:08:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:21.589 16:08:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:21.589 16:08:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:21.589 16:08:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:21.589 16:08:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:21.589 16:08:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:21.589 16:08:24 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:21.589 16:08:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:21.589 16:08:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:21.589 16:08:24 -- common/autotest_common.sh@10 -- # set +x 00:31:21.589 16:08:24 -- nvmf/common.sh@469 -- # nvmfpid=70892 00:31:21.589 16:08:24 -- nvmf/common.sh@470 -- # waitforlisten 70892 00:31:21.589 16:08:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:21.589 16:08:24 -- common/autotest_common.sh@819 -- # '[' -z 70892 ']' 00:31:21.589 16:08:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.589 16:08:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:21.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:21.589 16:08:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
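Note: the nvmf_veth_init block above wires the target into its own network namespace before the apps start; a condensed sketch of that topology, using only the device names and addresses shown in the log (the second target interface on 10.0.0.3 and the iptables ACCEPT rules are set up the same way and omitted here):
ip netns add nvmf_tgt_ns_spdk                                   # target gets a private namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                 # bridge the two host-side veth ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.2                                              # sanity check before the target starts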
00:31:21.589 16:08:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:21.589 16:08:24 -- common/autotest_common.sh@10 -- # set +x 00:31:21.848 [2024-07-22 16:08:24.474590] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:21.848 [2024-07-22 16:08:24.474693] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:21.848 [2024-07-22 16:08:24.611613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.848 [2024-07-22 16:08:24.670553] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:21.848 [2024-07-22 16:08:24.670729] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:21.848 [2024-07-22 16:08:24.670741] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:21.848 [2024-07-22 16:08:24.670750] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:21.848 [2024-07-22 16:08:24.670783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.782 16:08:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:22.782 16:08:25 -- common/autotest_common.sh@852 -- # return 0 00:31:22.782 16:08:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:22.782 16:08:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:22.782 16:08:25 -- common/autotest_common.sh@10 -- # set +x 00:31:22.782 16:08:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:22.782 16:08:25 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:22.782 16:08:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.782 16:08:25 -- common/autotest_common.sh@10 -- # set +x 00:31:22.782 [2024-07-22 16:08:25.531713] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:22.782 [2024-07-22 16:08:25.539890] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:22.782 null0 00:31:22.782 [2024-07-22 16:08:25.571833] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.782 16:08:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.782 16:08:25 -- host/discovery_remove_ifc.sh@59 -- # hostpid=70930 00:31:22.782 16:08:25 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:22.782 16:08:25 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 70930 /tmp/host.sock 00:31:22.782 16:08:25 -- common/autotest_common.sh@819 -- # '[' -z 70930 ']' 00:31:22.782 16:08:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:31:22.782 16:08:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:22.782 16:08:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:22.783 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:22.783 16:08:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:22.783 16:08:25 -- common/autotest_common.sh@10 -- # set +x 00:31:23.040 [2024-07-22 16:08:25.650530] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
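Note: two separate SPDK apps are running at this point, and keeping their RPC sockets apart is what lets the test drive both sides independently; restating the two launch lines from the log with their roles:
# Target side: nvmf_tgt inside the namespace, controlled over the default /var/tmp/spdk.sock
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
# Host/initiator side: a second nvmf_tgt instance on core 0, controlled over /tmp/host.sock,
# started with --wait-for-rpc so bdev_nvme_set_options can run before framework_start_init,
# and with -L bdev_nvme to produce the discovery/attach debug lines seen in this log
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &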
00:31:23.040 [2024-07-22 16:08:25.650628] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70930 ] 00:31:23.040 [2024-07-22 16:08:25.796993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.040 [2024-07-22 16:08:25.872239] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:23.040 [2024-07-22 16:08:25.872433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.973 16:08:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:23.973 16:08:26 -- common/autotest_common.sh@852 -- # return 0 00:31:23.973 16:08:26 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:23.973 16:08:26 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:23.973 16:08:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.973 16:08:26 -- common/autotest_common.sh@10 -- # set +x 00:31:23.973 16:08:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.974 16:08:26 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:23.974 16:08:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.974 16:08:26 -- common/autotest_common.sh@10 -- # set +x 00:31:23.974 16:08:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.974 16:08:26 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:23.974 16:08:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.974 16:08:26 -- common/autotest_common.sh@10 -- # set +x 00:31:24.909 [2024-07-22 16:08:27.634856] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:24.909 [2024-07-22 16:08:27.634904] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:24.909 [2024-07-22 16:08:27.634935] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:24.909 [2024-07-22 16:08:27.640902] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:24.909 [2024-07-22 16:08:27.697131] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:24.909 [2024-07-22 16:08:27.697217] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:24.909 [2024-07-22 16:08:27.697246] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:24.909 [2024-07-22 16:08:27.697264] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:24.909 [2024-07-22 16:08:27.697291] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:24.909 16:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.909 16:08:27 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:24.909 16:08:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:24.909 16:08:27 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:24.909 16:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.909 16:08:27 -- common/autotest_common.sh@10 -- # set +x 00:31:24.909 16:08:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:24.909 16:08:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:24.909 [2024-07-22 16:08:27.703512] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c01bb0 was disconnected and freed. delete nvme_qpair. 00:31:24.909 16:08:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:24.909 16:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.909 16:08:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:24.910 16:08:27 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:31:24.910 16:08:27 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:31:25.168 16:08:27 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:25.168 16:08:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:25.168 16:08:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:25.168 16:08:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.168 16:08:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:25.168 16:08:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:25.168 16:08:27 -- common/autotest_common.sh@10 -- # set +x 00:31:25.168 16:08:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:25.168 16:08:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.168 16:08:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:25.168 16:08:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:26.131 16:08:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:26.131 16:08:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:26.131 16:08:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:26.131 16:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.131 16:08:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:26.131 16:08:28 -- common/autotest_common.sh@10 -- # set +x 00:31:26.131 16:08:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:26.131 16:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.131 16:08:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:26.131 16:08:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:27.065 16:08:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:27.065 16:08:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.065 16:08:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:27.065 16:08:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.065 16:08:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:27.065 16:08:29 -- common/autotest_common.sh@10 -- # set +x 00:31:27.065 16:08:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:27.065 16:08:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.323 16:08:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:27.323 16:08:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:28.258 16:08:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:28.258 16:08:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:28.258 16:08:30 -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:31:28.258 16:08:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:28.258 16:08:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:28.258 16:08:30 -- common/autotest_common.sh@10 -- # set +x 00:31:28.258 16:08:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:28.258 16:08:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:28.258 16:08:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:28.258 16:08:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:29.192 16:08:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:29.192 16:08:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:29.192 16:08:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.192 16:08:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:29.192 16:08:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:29.192 16:08:32 -- common/autotest_common.sh@10 -- # set +x 00:31:29.192 16:08:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:29.192 16:08:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:29.451 16:08:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:29.451 16:08:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:30.432 16:08:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:30.432 16:08:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.432 16:08:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:30.432 16:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.432 16:08:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:30.432 16:08:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:30.432 16:08:33 -- common/autotest_common.sh@10 -- # set +x 00:31:30.432 16:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.432 16:08:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:30.432 16:08:33 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:30.432 [2024-07-22 16:08:33.124751] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:30.432 [2024-07-22 16:08:33.124817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:30.432 [2024-07-22 16:08:33.124834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:30.432 [2024-07-22 16:08:33.124848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:30.432 [2024-07-22 16:08:33.124858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:30.432 [2024-07-22 16:08:33.124869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:30.432 [2024-07-22 16:08:33.124878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:30.432 [2024-07-22 16:08:33.124889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:30.432 [2024-07-22 16:08:33.124898] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:30.432 [2024-07-22 16:08:33.124908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:30.432 [2024-07-22 16:08:33.124918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:30.432 [2024-07-22 16:08:33.124927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b701d0 is same with the state(5) to be set 00:31:30.432 [2024-07-22 16:08:33.134745] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b701d0 (9): Bad file descriptor 00:31:30.432 [2024-07-22 16:08:33.144767] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:31.366 16:08:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:31.366 16:08:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:31.366 16:08:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:31.366 16:08:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.366 16:08:34 -- common/autotest_common.sh@10 -- # set +x 00:31:31.366 16:08:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:31.366 16:08:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:31.366 [2024-07-22 16:08:34.176625] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:31:32.740 [2024-07-22 16:08:35.200623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:33.676 [2024-07-22 16:08:36.224627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:33.676 [2024-07-22 16:08:36.224758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b701d0 with addr=10.0.0.2, port=4420 00:31:33.676 [2024-07-22 16:08:36.224797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b701d0 is same with the state(5) to be set 00:31:33.676 [2024-07-22 16:08:36.224854] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:33.676 [2024-07-22 16:08:36.224878] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:33.676 [2024-07-22 16:08:36.224897] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:33.676 [2024-07-22 16:08:36.224918] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:33.676 [2024-07-22 16:08:36.225740] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b701d0 (9): Bad file descriptor 00:31:33.676 [2024-07-22 16:08:36.225804] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:33.676 [2024-07-22 16:08:36.225853] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:33.676 [2024-07-22 16:08:36.225922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.676 [2024-07-22 16:08:36.225952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.676 [2024-07-22 16:08:36.225981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.676 [2024-07-22 16:08:36.226002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.676 [2024-07-22 16:08:36.226024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.676 [2024-07-22 16:08:36.226044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.676 [2024-07-22 16:08:36.226066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.676 [2024-07-22 16:08:36.226087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.676 [2024-07-22 16:08:36.226110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.676 [2024-07-22 16:08:36.226130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.676 [2024-07-22 16:08:36.226150] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
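Note: the failed-state burst above is the expected consequence of the interface removal a few lines earlier; because discovery was started with --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1, the reconnect attempts give up within seconds of 10.0.0.2 disappearing and the namespace bdev is deleted. A rough replay of that step and a sketch of the check the test loops on (the real wait_for_bdev helper also sorts the names and re-polls once per second):
# Pull the target address out from under the established connection
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
# Poll until bdev_get_bdevs returns an empty list, i.e. nvme0n1 is gone
while [ -n "$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name')" ]; do
        sleep 1
done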
00:31:33.676 [2024-07-22 16:08:36.226209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b70420 (9): Bad file descriptor 00:31:33.676 [2024-07-22 16:08:36.227217] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:33.676 [2024-07-22 16:08:36.227267] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:33.676 16:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.676 16:08:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:33.676 16:08:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:34.681 16:08:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:34.681 16:08:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:34.681 16:08:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.681 16:08:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.681 16:08:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:34.681 16:08:37 -- common/autotest_common.sh@10 -- # set +x 00:31:34.681 16:08:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:34.681 16:08:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.681 16:08:37 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:34.681 16:08:37 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:34.681 16:08:37 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:34.681 16:08:37 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:34.681 16:08:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:34.681 16:08:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:34.681 16:08:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.681 16:08:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:34.681 16:08:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.681 16:08:37 -- common/autotest_common.sh@10 -- # set +x 00:31:34.681 16:08:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:34.681 16:08:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.681 16:08:37 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:34.681 16:08:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:35.616 [2024-07-22 16:08:38.231189] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:35.616 [2024-07-22 16:08:38.231229] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:35.616 [2024-07-22 16:08:38.231249] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:35.616 [2024-07-22 16:08:38.237228] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:35.616 [2024-07-22 16:08:38.292431] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:35.616 [2024-07-22 16:08:38.292515] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:35.616 [2024-07-22 16:08:38.292542] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:35.616 [2024-07-22 16:08:38.292559] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:31:35.616 [2024-07-22 16:08:38.292569] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:35.616 [2024-07-22 16:08:38.299898] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1bd3cb0 was disconnected and freed. delete nvme_qpair. 00:31:35.616 16:08:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:35.616 16:08:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:35.616 16:08:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:35.616 16:08:38 -- common/autotest_common.sh@10 -- # set +x 00:31:35.616 16:08:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:35.616 16:08:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:35.616 16:08:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:35.616 16:08:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:35.616 16:08:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:35.616 16:08:38 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:35.616 16:08:38 -- host/discovery_remove_ifc.sh@90 -- # killprocess 70930 00:31:35.616 16:08:38 -- common/autotest_common.sh@926 -- # '[' -z 70930 ']' 00:31:35.616 16:08:38 -- common/autotest_common.sh@930 -- # kill -0 70930 00:31:35.616 16:08:38 -- common/autotest_common.sh@931 -- # uname 00:31:35.616 16:08:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:35.616 16:08:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70930 00:31:35.875 killing process with pid 70930 00:31:35.875 16:08:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:35.875 16:08:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:35.875 16:08:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70930' 00:31:35.875 16:08:38 -- common/autotest_common.sh@945 -- # kill 70930 00:31:35.875 16:08:38 -- common/autotest_common.sh@950 -- # wait 70930 00:31:35.875 16:08:38 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:35.875 16:08:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:35.875 16:08:38 -- nvmf/common.sh@116 -- # sync 00:31:35.875 16:08:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:35.875 16:08:38 -- nvmf/common.sh@119 -- # set +e 00:31:35.875 16:08:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:35.875 16:08:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:35.875 rmmod nvme_tcp 00:31:35.875 rmmod nvme_fabrics 00:31:35.875 rmmod nvme_keyring 00:31:36.133 16:08:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:36.133 16:08:38 -- nvmf/common.sh@123 -- # set -e 00:31:36.133 16:08:38 -- nvmf/common.sh@124 -- # return 0 00:31:36.133 16:08:38 -- nvmf/common.sh@477 -- # '[' -n 70892 ']' 00:31:36.133 16:08:38 -- nvmf/common.sh@478 -- # killprocess 70892 00:31:36.133 16:08:38 -- common/autotest_common.sh@926 -- # '[' -z 70892 ']' 00:31:36.133 16:08:38 -- common/autotest_common.sh@930 -- # kill -0 70892 00:31:36.133 16:08:38 -- common/autotest_common.sh@931 -- # uname 00:31:36.133 16:08:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:36.133 16:08:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70892 00:31:36.133 killing process with pid 70892 00:31:36.134 16:08:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:36.134 16:08:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
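Note: the reattach above (nvme1 / nvme1n1) follows directly from restoring the interface; the restore step from the log, and the condition the final wait loop checks before the traps are dropped and the host app is killed:
# Give the target its address back and bring the interface up again
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# The still-running discovery poller reconnects on its own; the test waits until the
# bdev list reads exactly "nvme1n1" before tearing down
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'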
00:31:36.134 16:08:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70892' 00:31:36.134 16:08:38 -- common/autotest_common.sh@945 -- # kill 70892 00:31:36.134 16:08:38 -- common/autotest_common.sh@950 -- # wait 70892 00:31:36.134 16:08:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:36.134 16:08:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:36.134 16:08:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:36.134 16:08:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:36.134 16:08:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:36.134 16:08:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.134 16:08:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:36.134 16:08:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.393 16:08:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:31:36.393 00:31:36.393 real 0m15.029s 00:31:36.393 user 0m24.320s 00:31:36.393 sys 0m2.309s 00:31:36.393 16:08:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:36.393 16:08:39 -- common/autotest_common.sh@10 -- # set +x 00:31:36.393 ************************************ 00:31:36.393 END TEST nvmf_discovery_remove_ifc 00:31:36.393 ************************************ 00:31:36.393 16:08:39 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:31:36.393 16:08:39 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:36.393 16:08:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:36.393 16:08:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:36.393 16:08:39 -- common/autotest_common.sh@10 -- # set +x 00:31:36.393 ************************************ 00:31:36.393 START TEST nvmf_digest 00:31:36.393 ************************************ 00:31:36.393 16:08:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:36.393 * Looking for test storage... 
00:31:36.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:31:36.393 16:08:39 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:36.393 16:08:39 -- nvmf/common.sh@7 -- # uname -s 00:31:36.393 16:08:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:36.393 16:08:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:36.393 16:08:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:36.393 16:08:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:36.393 16:08:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:36.393 16:08:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:36.393 16:08:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:36.393 16:08:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:36.393 16:08:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:36.393 16:08:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:36.393 16:08:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:31:36.393 16:08:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:31:36.393 16:08:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:36.393 16:08:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:36.393 16:08:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:36.393 16:08:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:36.393 16:08:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:36.393 16:08:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:36.393 16:08:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:36.393 16:08:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.393 16:08:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.393 16:08:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.393 16:08:39 -- paths/export.sh@5 
-- # export PATH 00:31:36.393 16:08:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.393 16:08:39 -- nvmf/common.sh@46 -- # : 0 00:31:36.393 16:08:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:36.394 16:08:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:36.394 16:08:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:36.394 16:08:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:36.394 16:08:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:36.394 16:08:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:36.394 16:08:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:36.394 16:08:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:36.394 16:08:39 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:36.394 16:08:39 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:31:36.394 16:08:39 -- host/digest.sh@16 -- # runtime=2 00:31:36.394 16:08:39 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:31:36.394 16:08:39 -- host/digest.sh@132 -- # nvmftestinit 00:31:36.394 16:08:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:36.394 16:08:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:36.394 16:08:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:36.394 16:08:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:36.394 16:08:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:36.394 16:08:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.394 16:08:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:36.394 16:08:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.394 16:08:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:31:36.394 16:08:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:31:36.394 16:08:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:31:36.394 16:08:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:31:36.394 16:08:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:31:36.394 16:08:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:31:36.394 16:08:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:36.394 16:08:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:36.394 16:08:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:36.394 16:08:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:31:36.394 16:08:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:36.394 16:08:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:36.394 16:08:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:36.394 16:08:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:36.394 16:08:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:36.394 16:08:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:36.394 16:08:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:36.394 16:08:39 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:36.394 16:08:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:31:36.394 16:08:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:31:36.394 Cannot find device "nvmf_tgt_br" 00:31:36.394 16:08:39 -- nvmf/common.sh@154 -- # true 00:31:36.394 16:08:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:31:36.394 Cannot find device "nvmf_tgt_br2" 00:31:36.394 16:08:39 -- nvmf/common.sh@155 -- # true 00:31:36.394 16:08:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:31:36.394 16:08:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:31:36.394 Cannot find device "nvmf_tgt_br" 00:31:36.394 16:08:39 -- nvmf/common.sh@157 -- # true 00:31:36.394 16:08:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:31:36.394 Cannot find device "nvmf_tgt_br2" 00:31:36.394 16:08:39 -- nvmf/common.sh@158 -- # true 00:31:36.394 16:08:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:31:36.652 16:08:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:31:36.652 16:08:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:36.652 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:36.652 16:08:39 -- nvmf/common.sh@161 -- # true 00:31:36.652 16:08:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:36.652 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:36.652 16:08:39 -- nvmf/common.sh@162 -- # true 00:31:36.652 16:08:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:31:36.652 16:08:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:36.652 16:08:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:36.653 16:08:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:36.653 16:08:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:36.653 16:08:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:36.653 16:08:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:36.653 16:08:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:36.653 16:08:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:36.653 16:08:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:31:36.653 16:08:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:31:36.653 16:08:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:31:36.653 16:08:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:31:36.653 16:08:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:36.653 16:08:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:36.653 16:08:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:36.653 16:08:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:31:36.653 16:08:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:31:36.653 16:08:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:31:36.653 16:08:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:36.653 16:08:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:36.653 
16:08:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:36.653 16:08:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:36.653 16:08:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:31:36.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:36.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:31:36.653 00:31:36.653 --- 10.0.0.2 ping statistics --- 00:31:36.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.653 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:31:36.653 16:08:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:31:36.653 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:36.653 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:31:36.653 00:31:36.653 --- 10.0.0.3 ping statistics --- 00:31:36.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.653 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:31:36.653 16:08:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:36.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:36.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:31:36.653 00:31:36.653 --- 10.0.0.1 ping statistics --- 00:31:36.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.653 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:31:36.653 16:08:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:36.653 16:08:39 -- nvmf/common.sh@421 -- # return 0 00:31:36.653 16:08:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:36.653 16:08:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:36.653 16:08:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:36.653 16:08:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:36.653 16:08:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:36.653 16:08:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:36.912 16:08:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:36.912 16:08:39 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:36.912 16:08:39 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:31:36.912 16:08:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:36.912 16:08:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:36.912 16:08:39 -- common/autotest_common.sh@10 -- # set +x 00:31:36.912 ************************************ 00:31:36.912 START TEST nvmf_digest_clean 00:31:36.912 ************************************ 00:31:36.912 16:08:39 -- common/autotest_common.sh@1104 -- # run_digest 00:31:36.912 16:08:39 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:31:36.912 16:08:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:36.912 16:08:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:36.912 16:08:39 -- common/autotest_common.sh@10 -- # set +x 00:31:36.912 16:08:39 -- nvmf/common.sh@469 -- # nvmfpid=71343 00:31:36.912 16:08:39 -- nvmf/common.sh@470 -- # waitforlisten 71343 00:31:36.912 16:08:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:36.912 16:08:39 -- common/autotest_common.sh@819 -- # '[' -z 71343 ']' 00:31:36.912 16:08:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.912 16:08:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:36.912 16:08:39 -- 
common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.912 16:08:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:36.912 16:08:39 -- common/autotest_common.sh@10 -- # set +x 00:31:36.912 [2024-07-22 16:08:39.594465] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:36.912 [2024-07-22 16:08:39.594561] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.912 [2024-07-22 16:08:39.730181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.171 [2024-07-22 16:08:39.785524] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:37.171 [2024-07-22 16:08:39.785662] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:37.171 [2024-07-22 16:08:39.785676] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:37.171 [2024-07-22 16:08:39.785685] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:37.171 [2024-07-22 16:08:39.785717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.171 16:08:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:37.171 16:08:39 -- common/autotest_common.sh@852 -- # return 0 00:31:37.171 16:08:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:37.171 16:08:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:37.171 16:08:39 -- common/autotest_common.sh@10 -- # set +x 00:31:37.171 16:08:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:37.171 16:08:39 -- host/digest.sh@120 -- # common_target_config 00:31:37.171 16:08:39 -- host/digest.sh@43 -- # rpc_cmd 00:31:37.171 16:08:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.171 16:08:39 -- common/autotest_common.sh@10 -- # set +x 00:31:37.171 null0 00:31:37.171 [2024-07-22 16:08:39.926649] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:37.171 [2024-07-22 16:08:39.950828] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:37.171 16:08:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.171 16:08:39 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:31:37.171 16:08:39 -- host/digest.sh@77 -- # local rw bs qd 00:31:37.171 16:08:39 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:37.171 16:08:39 -- host/digest.sh@80 -- # rw=randread 00:31:37.171 16:08:39 -- host/digest.sh@80 -- # bs=4096 00:31:37.171 16:08:39 -- host/digest.sh@80 -- # qd=128 00:31:37.171 16:08:39 -- host/digest.sh@82 -- # bperfpid=71368 00:31:37.171 16:08:39 -- host/digest.sh@83 -- # waitforlisten 71368 /var/tmp/bperf.sock 00:31:37.171 16:08:39 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:37.171 16:08:39 -- common/autotest_common.sh@819 -- # '[' -z 71368 ']' 00:31:37.171 16:08:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:37.171 16:08:39 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:31:37.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:37.171 16:08:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:37.171 16:08:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:37.171 16:08:39 -- common/autotest_common.sh@10 -- # set +x 00:31:37.171 [2024-07-22 16:08:40.004679] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:37.171 [2024-07-22 16:08:40.004822] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71368 ] 00:31:37.429 [2024-07-22 16:08:40.141340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.429 [2024-07-22 16:08:40.200529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:38.368 16:08:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:38.368 16:08:40 -- common/autotest_common.sh@852 -- # return 0 00:31:38.368 16:08:40 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:38.368 16:08:40 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:38.368 16:08:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:38.626 16:08:41 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:38.626 16:08:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:38.884 nvme0n1 00:31:38.884 16:08:41 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:38.884 16:08:41 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:39.142 Running I/O for 2 seconds... 
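(Annotation, not part of the console output.) The sequence the clean-digest test drives in the trace above can be condensed into the following shell sketch; the binary paths, RPC socket, and NQN are taken verbatim from the log, but this is a reconstruction of the flow, not the full host/digest.sh logic:

  # start bdevperf paused on its own RPC socket (the script waits for it via waitforlisten)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # finish framework init, then attach the TCP target with data digest (--ddgst) enabled
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # run the 2-second workload against the resulting nvme0n1 bdev
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests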
00:31:41.052 00:31:41.052 Latency(us) 00:31:41.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:41.052 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:41.052 nvme0n1 : 2.01 14593.30 57.01 0.00 0.00 8765.31 7983.48 21567.30 00:31:41.052 =================================================================================================================== 00:31:41.052 Total : 14593.30 57.01 0.00 0.00 8765.31 7983.48 21567.30 00:31:41.052 0 00:31:41.052 16:08:43 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:41.052 16:08:43 -- host/digest.sh@92 -- # get_accel_stats 00:31:41.052 16:08:43 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:41.052 16:08:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:41.052 16:08:43 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:41.052 | select(.opcode=="crc32c") 00:31:41.052 | "\(.module_name) \(.executed)"' 00:31:41.325 16:08:44 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:41.325 16:08:44 -- host/digest.sh@93 -- # exp_module=software 00:31:41.325 16:08:44 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:41.325 16:08:44 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:41.325 16:08:44 -- host/digest.sh@97 -- # killprocess 71368 00:31:41.325 16:08:44 -- common/autotest_common.sh@926 -- # '[' -z 71368 ']' 00:31:41.325 16:08:44 -- common/autotest_common.sh@930 -- # kill -0 71368 00:31:41.325 16:08:44 -- common/autotest_common.sh@931 -- # uname 00:31:41.325 16:08:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:41.325 16:08:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71368 00:31:41.325 16:08:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:41.325 16:08:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:41.325 16:08:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71368' 00:31:41.325 killing process with pid 71368 00:31:41.325 Received shutdown signal, test time was about 2.000000 seconds 00:31:41.325 00:31:41.325 Latency(us) 00:31:41.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:41.325 =================================================================================================================== 00:31:41.325 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:41.325 16:08:44 -- common/autotest_common.sh@945 -- # kill 71368 00:31:41.325 16:08:44 -- common/autotest_common.sh@950 -- # wait 71368 00:31:41.584 16:08:44 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:31:41.584 16:08:44 -- host/digest.sh@77 -- # local rw bs qd 00:31:41.584 16:08:44 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:41.584 16:08:44 -- host/digest.sh@80 -- # rw=randread 00:31:41.584 16:08:44 -- host/digest.sh@80 -- # bs=131072 00:31:41.584 16:08:44 -- host/digest.sh@80 -- # qd=16 00:31:41.584 16:08:44 -- host/digest.sh@82 -- # bperfpid=71428 00:31:41.584 16:08:44 -- host/digest.sh@83 -- # waitforlisten 71428 /var/tmp/bperf.sock 00:31:41.584 16:08:44 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:41.584 16:08:44 -- common/autotest_common.sh@819 -- # '[' -z 71428 ']' 00:31:41.584 16:08:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:41.584 16:08:44 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:31:41.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:41.584 16:08:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:41.584 16:08:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:41.584 16:08:44 -- common/autotest_common.sh@10 -- # set +x 00:31:41.584 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:41.584 Zero copy mechanism will not be used. 00:31:41.584 [2024-07-22 16:08:44.339523] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:41.584 [2024-07-22 16:08:44.339613] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71428 ] 00:31:41.842 [2024-07-22 16:08:44.477775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.842 [2024-07-22 16:08:44.535386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:41.842 16:08:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:41.842 16:08:44 -- common/autotest_common.sh@852 -- # return 0 00:31:41.842 16:08:44 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:41.842 16:08:44 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:41.842 16:08:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:42.101 16:08:44 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:42.101 16:08:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:42.360 nvme0n1 00:31:42.360 16:08:45 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:42.360 16:08:45 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:42.618 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:42.618 Zero copy mechanism will not be used. 00:31:42.618 Running I/O for 2 seconds... 
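(Annotation, not part of the console output.) After each workload the script verifies that the crc32c digests were actually computed, and by the expected accel module (software in this job, since no hardware accel is configured). A minimal sketch of that check, reconstructed from the host/digest.sh@92-95 trace lines above; the process-substitution form is illustrative:

  stats=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats)
  read -r acc_module acc_executed < <(jq -rc '.operations[]
      | select(.opcode=="crc32c")
      | "\(.module_name) \(.executed)"' <<< "$stats")
  # test passes only if at least one crc32c op ran and it ran in the expected module
  (( acc_executed > 0 )) && [[ $acc_module == software ]]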
00:31:44.516 00:31:44.516 Latency(us) 00:31:44.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:44.516 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:44.516 nvme0n1 : 2.00 7294.13 911.77 0.00 0.00 2190.32 2040.55 6940.86 00:31:44.516 =================================================================================================================== 00:31:44.516 Total : 7294.13 911.77 0.00 0.00 2190.32 2040.55 6940.86 00:31:44.516 0 00:31:44.516 16:08:47 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:44.517 16:08:47 -- host/digest.sh@92 -- # get_accel_stats 00:31:44.517 16:08:47 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:44.517 | select(.opcode=="crc32c") 00:31:44.517 | "\(.module_name) \(.executed)"' 00:31:44.517 16:08:47 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:44.517 16:08:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:44.775 16:08:47 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:44.775 16:08:47 -- host/digest.sh@93 -- # exp_module=software 00:31:44.775 16:08:47 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:44.775 16:08:47 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:44.775 16:08:47 -- host/digest.sh@97 -- # killprocess 71428 00:31:44.775 16:08:47 -- common/autotest_common.sh@926 -- # '[' -z 71428 ']' 00:31:44.775 16:08:47 -- common/autotest_common.sh@930 -- # kill -0 71428 00:31:44.775 16:08:47 -- common/autotest_common.sh@931 -- # uname 00:31:44.775 16:08:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:44.775 16:08:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71428 00:31:45.033 16:08:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:45.033 killing process with pid 71428 00:31:45.033 16:08:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:45.033 16:08:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71428' 00:31:45.033 Received shutdown signal, test time was about 2.000000 seconds 00:31:45.033 00:31:45.033 Latency(us) 00:31:45.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:45.034 =================================================================================================================== 00:31:45.034 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:45.034 16:08:47 -- common/autotest_common.sh@945 -- # kill 71428 00:31:45.034 16:08:47 -- common/autotest_common.sh@950 -- # wait 71428 00:31:45.034 16:08:47 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:31:45.034 16:08:47 -- host/digest.sh@77 -- # local rw bs qd 00:31:45.034 16:08:47 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:45.034 16:08:47 -- host/digest.sh@80 -- # rw=randwrite 00:31:45.034 16:08:47 -- host/digest.sh@80 -- # bs=4096 00:31:45.034 16:08:47 -- host/digest.sh@80 -- # qd=128 00:31:45.034 16:08:47 -- host/digest.sh@82 -- # bperfpid=71481 00:31:45.034 16:08:47 -- host/digest.sh@83 -- # waitforlisten 71481 /var/tmp/bperf.sock 00:31:45.034 16:08:47 -- common/autotest_common.sh@819 -- # '[' -z 71481 ']' 00:31:45.034 16:08:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:45.034 16:08:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:45.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:31:45.034 16:08:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:45.034 16:08:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:45.034 16:08:47 -- common/autotest_common.sh@10 -- # set +x 00:31:45.034 16:08:47 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:45.292 [2024-07-22 16:08:47.911409] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:45.292 [2024-07-22 16:08:47.911516] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71481 ] 00:31:45.292 [2024-07-22 16:08:48.049149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.292 [2024-07-22 16:08:48.107004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:45.292 16:08:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:45.292 16:08:48 -- common/autotest_common.sh@852 -- # return 0 00:31:45.292 16:08:48 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:45.292 16:08:48 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:45.292 16:08:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:45.859 16:08:48 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:45.859 16:08:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:45.859 nvme0n1 00:31:46.117 16:08:48 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:46.117 16:08:48 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:46.117 Running I/O for 2 seconds... 
00:31:48.015 00:31:48.016 Latency(us) 00:31:48.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:48.016 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:48.016 nvme0n1 : 2.00 15658.74 61.17 0.00 0.00 8168.63 7864.32 16681.89 00:31:48.016 =================================================================================================================== 00:31:48.016 Total : 15658.74 61.17 0.00 0.00 8168.63 7864.32 16681.89 00:31:48.016 0 00:31:48.016 16:08:50 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:48.016 16:08:50 -- host/digest.sh@92 -- # get_accel_stats 00:31:48.016 16:08:50 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:48.016 | select(.opcode=="crc32c") 00:31:48.016 | "\(.module_name) \(.executed)"' 00:31:48.016 16:08:50 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:48.016 16:08:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:48.582 16:08:51 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:48.582 16:08:51 -- host/digest.sh@93 -- # exp_module=software 00:31:48.582 16:08:51 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:48.582 16:08:51 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:48.582 16:08:51 -- host/digest.sh@97 -- # killprocess 71481 00:31:48.582 16:08:51 -- common/autotest_common.sh@926 -- # '[' -z 71481 ']' 00:31:48.582 16:08:51 -- common/autotest_common.sh@930 -- # kill -0 71481 00:31:48.582 16:08:51 -- common/autotest_common.sh@931 -- # uname 00:31:48.582 16:08:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:48.582 16:08:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71481 00:31:48.582 16:08:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:48.582 16:08:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:48.582 killing process with pid 71481 00:31:48.582 16:08:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71481' 00:31:48.582 Received shutdown signal, test time was about 2.000000 seconds 00:31:48.582 00:31:48.582 Latency(us) 00:31:48.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:48.582 =================================================================================================================== 00:31:48.582 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:48.582 16:08:51 -- common/autotest_common.sh@945 -- # kill 71481 00:31:48.582 16:08:51 -- common/autotest_common.sh@950 -- # wait 71481 00:31:48.582 16:08:51 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:31:48.582 16:08:51 -- host/digest.sh@77 -- # local rw bs qd 00:31:48.582 16:08:51 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:48.582 16:08:51 -- host/digest.sh@80 -- # rw=randwrite 00:31:48.582 16:08:51 -- host/digest.sh@80 -- # bs=131072 00:31:48.582 16:08:51 -- host/digest.sh@80 -- # qd=16 00:31:48.582 16:08:51 -- host/digest.sh@82 -- # bperfpid=71529 00:31:48.582 16:08:51 -- host/digest.sh@83 -- # waitforlisten 71529 /var/tmp/bperf.sock 00:31:48.582 16:08:51 -- common/autotest_common.sh@819 -- # '[' -z 71529 ']' 00:31:48.582 16:08:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:48.582 16:08:51 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:48.582 16:08:51 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:31:48.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:48.582 16:08:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:48.582 16:08:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:48.582 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:31:48.582 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:48.582 Zero copy mechanism will not be used. 00:31:48.582 [2024-07-22 16:08:51.423921] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:48.582 [2024-07-22 16:08:51.424025] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71529 ] 00:31:48.840 [2024-07-22 16:08:51.561867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.840 [2024-07-22 16:08:51.629668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:49.775 16:08:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:49.775 16:08:52 -- common/autotest_common.sh@852 -- # return 0 00:31:49.775 16:08:52 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:49.775 16:08:52 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:49.775 16:08:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:50.033 16:08:52 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:50.033 16:08:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:50.296 nvme0n1 00:31:50.296 16:08:52 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:50.296 16:08:52 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:50.296 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:50.297 Zero copy mechanism will not be used. 00:31:50.297 Running I/O for 2 seconds... 
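(Annotation, not part of the console output.) The clean-digest phase repeats the same attach-and-run flow over four workload shapes, as the run_bperf calls in the trace show:

  run_bperf randread  4096   128   # 4 KiB reads,  queue depth 128
  run_bperf randread  131072 16    # 128 KiB reads, queue depth 16 (above the 65536 zero-copy threshold, so zero copy is skipped)
  run_bperf randwrite 4096   128   # 4 KiB writes,  queue depth 128
  run_bperf randwrite 131072 16    # 128 KiB writes, queue depth 16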
00:31:52.840 00:31:52.840 Latency(us) 00:31:52.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:52.840 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:52.840 nvme0n1 : 2.00 6586.11 823.26 0.00 0.00 2423.74 1526.69 3813.00 00:31:52.840 =================================================================================================================== 00:31:52.840 Total : 6586.11 823.26 0.00 0.00 2423.74 1526.69 3813.00 00:31:52.840 0 00:31:52.840 16:08:55 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:52.840 16:08:55 -- host/digest.sh@92 -- # get_accel_stats 00:31:52.840 16:08:55 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:52.840 16:08:55 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:52.840 | select(.opcode=="crc32c") 00:31:52.840 | "\(.module_name) \(.executed)"' 00:31:52.840 16:08:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:52.840 16:08:55 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:52.840 16:08:55 -- host/digest.sh@93 -- # exp_module=software 00:31:52.840 16:08:55 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:52.840 16:08:55 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:52.840 16:08:55 -- host/digest.sh@97 -- # killprocess 71529 00:31:52.840 16:08:55 -- common/autotest_common.sh@926 -- # '[' -z 71529 ']' 00:31:52.840 16:08:55 -- common/autotest_common.sh@930 -- # kill -0 71529 00:31:52.840 16:08:55 -- common/autotest_common.sh@931 -- # uname 00:31:52.840 16:08:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:52.840 16:08:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71529 00:31:52.840 16:08:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:52.840 16:08:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:52.840 killing process with pid 71529 00:31:52.840 16:08:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71529' 00:31:52.840 16:08:55 -- common/autotest_common.sh@945 -- # kill 71529 00:31:52.840 Received shutdown signal, test time was about 2.000000 seconds 00:31:52.840 00:31:52.840 Latency(us) 00:31:52.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:52.840 =================================================================================================================== 00:31:52.840 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:52.840 16:08:55 -- common/autotest_common.sh@950 -- # wait 71529 00:31:52.840 16:08:55 -- host/digest.sh@126 -- # killprocess 71343 00:31:52.840 16:08:55 -- common/autotest_common.sh@926 -- # '[' -z 71343 ']' 00:31:52.840 16:08:55 -- common/autotest_common.sh@930 -- # kill -0 71343 00:31:52.840 16:08:55 -- common/autotest_common.sh@931 -- # uname 00:31:52.840 16:08:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:52.840 16:08:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71343 00:31:52.840 16:08:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:52.840 killing process with pid 71343 00:31:52.840 16:08:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:52.840 16:08:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71343' 00:31:52.840 16:08:55 -- common/autotest_common.sh@945 -- # kill 71343 00:31:52.840 16:08:55 -- common/autotest_common.sh@950 -- # wait 71343 00:31:53.100 00:31:53.100 real 0m16.265s 00:31:53.100 user 
0m31.818s 00:31:53.100 sys 0m4.486s 00:31:53.100 16:08:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:53.100 16:08:55 -- common/autotest_common.sh@10 -- # set +x 00:31:53.100 ************************************ 00:31:53.100 END TEST nvmf_digest_clean 00:31:53.100 ************************************ 00:31:53.100 16:08:55 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:31:53.100 16:08:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:53.100 16:08:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:53.100 16:08:55 -- common/autotest_common.sh@10 -- # set +x 00:31:53.100 ************************************ 00:31:53.100 START TEST nvmf_digest_error 00:31:53.100 ************************************ 00:31:53.100 16:08:55 -- common/autotest_common.sh@1104 -- # run_digest_error 00:31:53.100 16:08:55 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:31:53.100 16:08:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:53.100 16:08:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:53.100 16:08:55 -- common/autotest_common.sh@10 -- # set +x 00:31:53.100 16:08:55 -- nvmf/common.sh@469 -- # nvmfpid=71614 00:31:53.100 16:08:55 -- nvmf/common.sh@470 -- # waitforlisten 71614 00:31:53.100 16:08:55 -- common/autotest_common.sh@819 -- # '[' -z 71614 ']' 00:31:53.100 16:08:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:53.100 16:08:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:53.100 16:08:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:53.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:53.100 16:08:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:53.100 16:08:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:53.100 16:08:55 -- common/autotest_common.sh@10 -- # set +x 00:31:53.100 [2024-07-22 16:08:55.927613] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:53.100 [2024-07-22 16:08:55.927712] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:53.359 [2024-07-22 16:08:56.070477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.359 [2024-07-22 16:08:56.141945] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:53.359 [2024-07-22 16:08:56.142106] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:53.359 [2024-07-22 16:08:56.142120] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:53.359 [2024-07-22 16:08:56.142130] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:53.359 [2024-07-22 16:08:56.142166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.295 16:08:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:54.295 16:08:56 -- common/autotest_common.sh@852 -- # return 0 00:31:54.295 16:08:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:54.295 16:08:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:54.295 16:08:56 -- common/autotest_common.sh@10 -- # set +x 00:31:54.295 16:08:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:54.295 16:08:56 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:31:54.295 16:08:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.295 16:08:56 -- common/autotest_common.sh@10 -- # set +x 00:31:54.295 [2024-07-22 16:08:56.922684] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:31:54.295 16:08:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.295 16:08:56 -- host/digest.sh@104 -- # common_target_config 00:31:54.295 16:08:56 -- host/digest.sh@43 -- # rpc_cmd 00:31:54.295 16:08:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.295 16:08:56 -- common/autotest_common.sh@10 -- # set +x 00:31:54.295 null0 00:31:54.295 [2024-07-22 16:08:56.992993] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:54.295 [2024-07-22 16:08:57.017131] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.295 16:08:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.295 16:08:57 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:31:54.295 16:08:57 -- host/digest.sh@54 -- # local rw bs qd 00:31:54.295 16:08:57 -- host/digest.sh@56 -- # rw=randread 00:31:54.295 16:08:57 -- host/digest.sh@56 -- # bs=4096 00:31:54.295 16:08:57 -- host/digest.sh@56 -- # qd=128 00:31:54.295 16:08:57 -- host/digest.sh@58 -- # bperfpid=71653 00:31:54.295 16:08:57 -- host/digest.sh@60 -- # waitforlisten 71653 /var/tmp/bperf.sock 00:31:54.295 16:08:57 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:31:54.295 16:08:57 -- common/autotest_common.sh@819 -- # '[' -z 71653 ']' 00:31:54.295 16:08:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:54.295 16:08:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:54.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:54.295 16:08:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:54.295 16:08:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:54.295 16:08:57 -- common/autotest_common.sh@10 -- # set +x 00:31:54.295 [2024-07-22 16:08:57.080162] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:31:54.295 [2024-07-22 16:08:57.080286] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71653 ] 00:31:54.554 [2024-07-22 16:08:57.221827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.554 [2024-07-22 16:08:57.292376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.530 16:08:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:55.530 16:08:58 -- common/autotest_common.sh@852 -- # return 0 00:31:55.530 16:08:58 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:55.530 16:08:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:55.530 16:08:58 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:55.530 16:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.530 16:08:58 -- common/autotest_common.sh@10 -- # set +x 00:31:55.530 16:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.530 16:08:58 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:55.530 16:08:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:55.788 nvme0n1 00:31:55.788 16:08:58 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:55.788 16:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.788 16:08:58 -- common/autotest_common.sh@10 -- # set +x 00:31:55.788 16:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.788 16:08:58 -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:55.788 16:08:58 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:56.047 Running I/O for 2 seconds... 
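(Annotation, not part of the console output.) The error phase routes the target's crc32c opcode to the error-injection accel module and then corrupts a batch of digest computations, so the data digests the target emits stop matching; that is what produces the nvme_tcp data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions in the bdevperf output that follows. A condensed sketch of the RPC sequence, in the order it appears in the trace (rpc_cmd talks to the target at /var/tmp/spdk.sock, bperf_rpc to bdevperf at /var/tmp/bperf.sock):

  # target side: assign crc32c to the error accel module (done at target startup)
  rpc_cmd accel_assign_opc -o crc32c -m error
  # initiator side: retry indefinitely so injected digest errors do not fail the bdev
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd accel_error_inject_error -o crc32c -t disable          # start with injection off
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt 256 crc32c operations
  # then bdevperf.py perform_tests runs the 2-second workload shown below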
00:31:56.047 [2024-07-22 16:08:58.759815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.047 [2024-07-22 16:08:58.759882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.047 [2024-07-22 16:08:58.759898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.047 [2024-07-22 16:08:58.777537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.047 [2024-07-22 16:08:58.777586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.047 [2024-07-22 16:08:58.777612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.047 [2024-07-22 16:08:58.795481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.047 [2024-07-22 16:08:58.795548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.047 [2024-07-22 16:08:58.795563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.047 [2024-07-22 16:08:58.812984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.047 [2024-07-22 16:08:58.813036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.047 [2024-07-22 16:08:58.813051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.047 [2024-07-22 16:08:58.830589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.047 [2024-07-22 16:08:58.830644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.047 [2024-07-22 16:08:58.830659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.047 [2024-07-22 16:08:58.848022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.047 [2024-07-22 16:08:58.848083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.047 [2024-07-22 16:08:58.848099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.047 [2024-07-22 16:08:58.865650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.047 [2024-07-22 16:08:58.865715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.047 [2024-07-22 16:08:58.865730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.047 [2024-07-22 16:08:58.883012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.047 [2024-07-22 16:08:58.883058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.047 [2024-07-22 16:08:58.883072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.047 [2024-07-22 16:08:58.900339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.047 [2024-07-22 16:08:58.900386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.047 [2024-07-22 16:08:58.900401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.305 [2024-07-22 16:08:58.917795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.305 [2024-07-22 16:08:58.917844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.305 [2024-07-22 16:08:58.917859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.306 [2024-07-22 16:08:58.935358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.306 [2024-07-22 16:08:58.935406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.306 [2024-07-22 16:08:58.935421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.306 [2024-07-22 16:08:58.952748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.306 [2024-07-22 16:08:58.952791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.306 [2024-07-22 16:08:58.952806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.306 [2024-07-22 16:08:58.970238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.306 [2024-07-22 16:08:58.970287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.306 [2024-07-22 16:08:58.970302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.306 [2024-07-22 16:08:58.987624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.306 [2024-07-22 16:08:58.987664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.306 [2024-07-22 16:08:58.987678] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.306 [2024-07-22 16:08:59.005153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.306 [2024-07-22 16:08:59.005194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.306 [2024-07-22 16:08:59.005209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.306 [2024-07-22 16:08:59.022593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.306 [2024-07-22 16:08:59.022635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.306 [2024-07-22 16:08:59.022650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.306 [2024-07-22 16:08:59.040152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.306 [2024-07-22 16:08:59.040197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.306 [2024-07-22 16:08:59.040212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.306 [2024-07-22 16:08:59.057448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.306 [2024-07-22 16:08:59.057503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.306 [2024-07-22 16:08:59.057519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.306 [2024-07-22 16:08:59.076136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.306 [2024-07-22 16:08:59.076193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.306 [2024-07-22 16:08:59.076210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.306 [2024-07-22 16:08:59.095235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.306 [2024-07-22 16:08:59.095310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.306 [2024-07-22 16:08:59.095325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.306 [2024-07-22 16:08:59.113003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.306 [2024-07-22 16:08:59.113043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.306 [2024-07-22 
16:08:59.113057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.306 [2024-07-22 16:08:59.130905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.306 [2024-07-22 16:08:59.130965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.306 [2024-07-22 16:08:59.130979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.306 [2024-07-22 16:08:59.148739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.306 [2024-07-22 16:08:59.148772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.306 [2024-07-22 16:08:59.148785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.306 [2024-07-22 16:08:59.167330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.306 [2024-07-22 16:08:59.167368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.306 [2024-07-22 16:08:59.167382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.564 [2024-07-22 16:08:59.185449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.564 [2024-07-22 16:08:59.185502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-22 16:08:59.185519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.564 [2024-07-22 16:08:59.203414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.564 [2024-07-22 16:08:59.203473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-22 16:08:59.203500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.564 [2024-07-22 16:08:59.221309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.564 [2024-07-22 16:08:59.221346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-22 16:08:59.221360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.564 [2024-07-22 16:08:59.239305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.564 [2024-07-22 16:08:59.239343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15610 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-22 16:08:59.239357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.564 [2024-07-22 16:08:59.256789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.564 [2024-07-22 16:08:59.256850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-22 16:08:59.256864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.564 [2024-07-22 16:08:59.274625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.564 [2024-07-22 16:08:59.274713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-22 16:08:59.274728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.564 [2024-07-22 16:08:59.292640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.564 [2024-07-22 16:08:59.292681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-22 16:08:59.292695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.564 [2024-07-22 16:08:59.310264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.564 [2024-07-22 16:08:59.310316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-22 16:08:59.310330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.564 [2024-07-22 16:08:59.327854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.564 [2024-07-22 16:08:59.327903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-22 16:08:59.327918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.564 [2024-07-22 16:08:59.345161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.564 [2024-07-22 16:08:59.345222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-22 16:08:59.345236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.564 [2024-07-22 16:08:59.362985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.564 [2024-07-22 16:08:59.363047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:69 nsid:1 lba:10638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-22 16:08:59.363062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.564 [2024-07-22 16:08:59.380459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.564 [2024-07-22 16:08:59.380510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-22 16:08:59.380524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.564 [2024-07-22 16:08:59.397809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.564 [2024-07-22 16:08:59.397848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.564 [2024-07-22 16:08:59.397862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.564 [2024-07-22 16:08:59.415655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.564 [2024-07-22 16:08:59.415738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.565 [2024-07-22 16:08:59.415753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.823 [2024-07-22 16:08:59.433965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.823 [2024-07-22 16:08:59.434040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.823 [2024-07-22 16:08:59.434055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.823 [2024-07-22 16:08:59.451872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.823 [2024-07-22 16:08:59.451911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.823 [2024-07-22 16:08:59.451926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.823 [2024-07-22 16:08:59.469454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.823 [2024-07-22 16:08:59.469504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.823 [2024-07-22 16:08:59.469519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.823 [2024-07-22 16:08:59.487086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.823 [2024-07-22 16:08:59.487125] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.823 [2024-07-22 16:08:59.487139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.823 [2024-07-22 16:08:59.505333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.823 [2024-07-22 16:08:59.505407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.823 [2024-07-22 16:08:59.505422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.823 [2024-07-22 16:08:59.524188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.823 [2024-07-22 16:08:59.524272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.824 [2024-07-22 16:08:59.524303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.824 [2024-07-22 16:08:59.542753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.824 [2024-07-22 16:08:59.542837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.824 [2024-07-22 16:08:59.542852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.824 [2024-07-22 16:08:59.561039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.824 [2024-07-22 16:08:59.561141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.824 [2024-07-22 16:08:59.561155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.824 [2024-07-22 16:08:59.579319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.824 [2024-07-22 16:08:59.579384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.824 [2024-07-22 16:08:59.579400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.824 [2024-07-22 16:08:59.597489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.824 [2024-07-22 16:08:59.597565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.824 [2024-07-22 16:08:59.597597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.824 [2024-07-22 16:08:59.615592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21a8340) 00:31:56.824 [2024-07-22 16:08:59.615667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.824 [2024-07-22 16:08:59.615683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.824 [2024-07-22 16:08:59.633727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.824 [2024-07-22 16:08:59.633798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.824 [2024-07-22 16:08:59.633813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.824 [2024-07-22 16:08:59.652192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.824 [2024-07-22 16:08:59.652260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.824 [2024-07-22 16:08:59.652275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.824 [2024-07-22 16:08:59.669956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:56.824 [2024-07-22 16:08:59.670014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.824 [2024-07-22 16:08:59.670028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.083 [2024-07-22 16:08:59.687842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.083 [2024-07-22 16:08:59.687913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.083 [2024-07-22 16:08:59.687929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.083 [2024-07-22 16:08:59.705792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.083 [2024-07-22 16:08:59.705859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.083 [2024-07-22 16:08:59.705875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.083 [2024-07-22 16:08:59.723962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.083 [2024-07-22 16:08:59.724033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.083 [2024-07-22 16:08:59.724048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.083 [2024-07-22 16:08:59.741527] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.083 [2024-07-22 16:08:59.741582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.083 [2024-07-22 16:08:59.741597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.083 [2024-07-22 16:08:59.758949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.083 [2024-07-22 16:08:59.758996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.083 [2024-07-22 16:08:59.759010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.083 [2024-07-22 16:08:59.776378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.083 [2024-07-22 16:08:59.776426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.083 [2024-07-22 16:08:59.776440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.083 [2024-07-22 16:08:59.794185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.083 [2024-07-22 16:08:59.794232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.083 [2024-07-22 16:08:59.794246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.083 [2024-07-22 16:08:59.812241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.083 [2024-07-22 16:08:59.812292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.083 [2024-07-22 16:08:59.812306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.083 [2024-07-22 16:08:59.829699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.083 [2024-07-22 16:08:59.829745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.083 [2024-07-22 16:08:59.829759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.083 [2024-07-22 16:08:59.847538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.083 [2024-07-22 16:08:59.847600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.084 [2024-07-22 16:08:59.847616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:57.084 [2024-07-22 16:08:59.865082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.084 [2024-07-22 16:08:59.865155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.084 [2024-07-22 16:08:59.865171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.084 [2024-07-22 16:08:59.890410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.084 [2024-07-22 16:08:59.890470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.084 [2024-07-22 16:08:59.890497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.084 [2024-07-22 16:08:59.909097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.084 [2024-07-22 16:08:59.909175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.084 [2024-07-22 16:08:59.909190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.084 [2024-07-22 16:08:59.926820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.084 [2024-07-22 16:08:59.926861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.084 [2024-07-22 16:08:59.926875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.084 [2024-07-22 16:08:59.944313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.084 [2024-07-22 16:08:59.944360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.084 [2024-07-22 16:08:59.944375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.343 [2024-07-22 16:08:59.961907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.343 [2024-07-22 16:08:59.961982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.343 [2024-07-22 16:08:59.961996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.343 [2024-07-22 16:08:59.979697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.343 [2024-07-22 16:08:59.979752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.343 [2024-07-22 16:08:59.979765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.343 [2024-07-22 16:08:59.997220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.343 [2024-07-22 16:08:59.997258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.343 [2024-07-22 16:08:59.997272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.343 [2024-07-22 16:09:00.015872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.343 [2024-07-22 16:09:00.015948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.343 [2024-07-22 16:09:00.015964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.343 [2024-07-22 16:09:00.033930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.343 [2024-07-22 16:09:00.033988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.343 [2024-07-22 16:09:00.034005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.343 [2024-07-22 16:09:00.051849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.343 [2024-07-22 16:09:00.051914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.343 [2024-07-22 16:09:00.051930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.343 [2024-07-22 16:09:00.069601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.343 [2024-07-22 16:09:00.069670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.343 [2024-07-22 16:09:00.069686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.343 [2024-07-22 16:09:00.087624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.343 [2024-07-22 16:09:00.087692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.343 [2024-07-22 16:09:00.087707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.343 [2024-07-22 16:09:00.105835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.343 [2024-07-22 16:09:00.105922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.343 [2024-07-22 
16:09:00.105940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.343 [2024-07-22 16:09:00.123627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.343 [2024-07-22 16:09:00.123697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.343 [2024-07-22 16:09:00.123714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.343 [2024-07-22 16:09:00.141650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.343 [2024-07-22 16:09:00.141713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.343 [2024-07-22 16:09:00.141729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.343 [2024-07-22 16:09:00.159798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.343 [2024-07-22 16:09:00.159861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.343 [2024-07-22 16:09:00.159876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.343 [2024-07-22 16:09:00.177742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.343 [2024-07-22 16:09:00.177804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.343 [2024-07-22 16:09:00.177833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.343 [2024-07-22 16:09:00.195470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.343 [2024-07-22 16:09:00.195538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.343 [2024-07-22 16:09:00.195552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.602 [2024-07-22 16:09:00.213035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.602 [2024-07-22 16:09:00.213083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.602 [2024-07-22 16:09:00.213098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.602 [2024-07-22 16:09:00.230536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.602 [2024-07-22 16:09:00.230583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22241 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:57.602 [2024-07-22 16:09:00.230597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.602 [2024-07-22 16:09:00.247942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.602 [2024-07-22 16:09:00.248008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.602 [2024-07-22 16:09:00.248022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.602 [2024-07-22 16:09:00.265618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.602 [2024-07-22 16:09:00.265671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.602 [2024-07-22 16:09:00.265686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.602 [2024-07-22 16:09:00.282985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.602 [2024-07-22 16:09:00.283030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.602 [2024-07-22 16:09:00.283044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.602 [2024-07-22 16:09:00.300395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.602 [2024-07-22 16:09:00.300446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.602 [2024-07-22 16:09:00.300461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.602 [2024-07-22 16:09:00.317828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.602 [2024-07-22 16:09:00.317882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.602 [2024-07-22 16:09:00.317896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.602 [2024-07-22 16:09:00.335413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.602 [2024-07-22 16:09:00.335468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.602 [2024-07-22 16:09:00.335483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.602 [2024-07-22 16:09:00.352760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.602 [2024-07-22 16:09:00.352811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:14205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.602 [2024-07-22 16:09:00.352825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.602 [2024-07-22 16:09:00.370479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.602 [2024-07-22 16:09:00.370562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.602 [2024-07-22 16:09:00.370577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.602 [2024-07-22 16:09:00.388050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.602 [2024-07-22 16:09:00.388105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.602 [2024-07-22 16:09:00.388120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.602 [2024-07-22 16:09:00.405516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.602 [2024-07-22 16:09:00.405575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.602 [2024-07-22 16:09:00.405590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.602 [2024-07-22 16:09:00.423165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.602 [2024-07-22 16:09:00.423223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.602 [2024-07-22 16:09:00.423238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.602 [2024-07-22 16:09:00.440496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.602 [2024-07-22 16:09:00.440545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.602 [2024-07-22 16:09:00.440560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.602 [2024-07-22 16:09:00.458056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.602 [2024-07-22 16:09:00.458115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.602 [2024-07-22 16:09:00.458131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.860 [2024-07-22 16:09:00.475609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.860 [2024-07-22 16:09:00.475662] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.860 [2024-07-22 16:09:00.475677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.860 [2024-07-22 16:09:00.493169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.860 [2024-07-22 16:09:00.493230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.860 [2024-07-22 16:09:00.493245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.860 [2024-07-22 16:09:00.510723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.860 [2024-07-22 16:09:00.510784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.860 [2024-07-22 16:09:00.510810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.860 [2024-07-22 16:09:00.528333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.860 [2024-07-22 16:09:00.528401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.860 [2024-07-22 16:09:00.528416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.860 [2024-07-22 16:09:00.546667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.860 [2024-07-22 16:09:00.546729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.860 [2024-07-22 16:09:00.546743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.860 [2024-07-22 16:09:00.566022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.860 [2024-07-22 16:09:00.566080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.860 [2024-07-22 16:09:00.566094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.860 [2024-07-22 16:09:00.583925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.860 [2024-07-22 16:09:00.583987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.860 [2024-07-22 16:09:00.584002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.860 [2024-07-22 16:09:00.601407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21a8340) 00:31:57.860 [2024-07-22 16:09:00.601461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.860 [2024-07-22 16:09:00.601476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.860 [2024-07-22 16:09:00.618736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.860 [2024-07-22 16:09:00.618783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.860 [2024-07-22 16:09:00.618797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.860 [2024-07-22 16:09:00.636200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.860 [2024-07-22 16:09:00.636257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.860 [2024-07-22 16:09:00.636271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.860 [2024-07-22 16:09:00.653644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.860 [2024-07-22 16:09:00.653702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.860 [2024-07-22 16:09:00.653718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.860 [2024-07-22 16:09:00.671361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.860 [2024-07-22 16:09:00.671423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.860 [2024-07-22 16:09:00.671438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.860 [2024-07-22 16:09:00.688727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.860 [2024-07-22 16:09:00.688774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.860 [2024-07-22 16:09:00.688789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.860 [2024-07-22 16:09:00.706183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340) 00:31:57.861 [2024-07-22 16:09:00.706243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.861 [2024-07-22 16:09:00.706265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:58.119 [2024-07-22 16:09:00.723944] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340)
00:31:58.119 [2024-07-22 16:09:00.724005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.119 [2024-07-22 16:09:00.724038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.119 [2024-07-22 16:09:00.741337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a8340)
00:31:58.119 [2024-07-22 16:09:00.741399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.119 [2024-07-22 16:09:00.741414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:58.119
00:31:58.119 Latency(us)
00:31:58.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:58.119 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:31:58.119 nvme0n1 : 2.01 14224.04 55.56 0.00 0.00 8990.53 8460.10 35270.28
00:31:58.119 ===================================================================================================================
00:31:58.119 Total : 14224.04 55.56 0.00 0.00 8990.53 8460.10 35270.28
00:31:58.119 0
00:31:58.119 16:09:00 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:58.119 16:09:00 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:58.119 16:09:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:58.119 16:09:00 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:58.119 | .driver_specific
00:31:58.119 | .nvme_error
00:31:58.119 | .status_code
00:31:58.119 | .command_transient_transport_error'
00:31:58.378 16:09:01 -- host/digest.sh@71 -- # (( 112 > 0 ))
00:31:58.378 16:09:01 -- host/digest.sh@73 -- # killprocess 71653
00:31:58.378 16:09:01 -- common/autotest_common.sh@926 -- # '[' -z 71653 ']'
00:31:58.378 16:09:01 -- common/autotest_common.sh@930 -- # kill -0 71653
00:31:58.378 16:09:01 -- common/autotest_common.sh@931 -- # uname
00:31:58.378 16:09:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:31:58.378 16:09:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71653
00:31:58.378 16:09:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:31:58.378 16:09:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:31:58.378 16:09:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71653'
killing process with pid 71653
Received shutdown signal, test time was about 2.000000 seconds
00:31:58.378
00:31:58.378 Latency(us)
00:31:58.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:58.378 ===================================================================================================================
00:31:58.378 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:58.378 16:09:01 -- common/autotest_common.sh@945 -- # kill 71653
00:31:58.378 16:09:01 -- common/autotest_common.sh@950 -- # wait 71653
00:31:58.637 16:09:01 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:31:58.637 16:09:01 -- host/digest.sh@54 -- # local rw bs qd
00:31:58.637 16:09:01 -- host/digest.sh@56 -- # rw=randread
00:31:58.637 16:09:01 -- host/digest.sh@56 -- # bs=131072
00:31:58.637 16:09:01 -- host/digest.sh@56 -- # qd=16
00:31:58.637 16:09:01 -- host/digest.sh@58 -- # bperfpid=71708
00:31:58.637 16:09:01 -- host/digest.sh@60 -- # waitforlisten 71708 /var/tmp/bperf.sock
00:31:58.637 16:09:01 -- common/autotest_common.sh@819 -- # '[' -z 71708 ']'
00:31:58.637 16:09:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:58.637 16:09:01 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:31:58.637 16:09:01 -- common/autotest_common.sh@824 -- # local max_retries=100
00:31:58.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:58.637 16:09:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:58.637 16:09:01 -- common/autotest_common.sh@828 -- # xtrace_disable
00:31:58.637 16:09:01 -- common/autotest_common.sh@10 -- # set +x
00:31:58.637 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:58.637 Zero copy mechanism will not be used.
00:31:58.637 [2024-07-22 16:09:01.309992] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:31:58.637 [2024-07-22 16:09:01.310096] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71708 ]
00:31:58.637 [2024-07-22 16:09:01.445154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:58.896 [2024-07-22 16:09:01.502372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:59.462 16:09:02 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:31:59.462 16:09:02 -- common/autotest_common.sh@852 -- # return 0
00:31:59.462 16:09:02 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:59.462 16:09:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:59.720 16:09:02 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:59.720 16:09:02 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:59.720 16:09:02 -- common/autotest_common.sh@10 -- # set +x
00:31:59.720 16:09:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:59.721 16:09:02 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:59.721 16:09:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:00.286 nvme0n1
00:32:00.286 16:09:02 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:00.286 16:09:02 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:00.286 16:09:02 -- common/autotest_common.sh@10 -- # set +x
00:32:00.286 16:09:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:00.286 16:09:02 -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:00.286 16:09:02 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
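Read in order, the xtrace above amounts to roughly the following per-pass sequence; this is only a consolidated sketch of the commands already visible in the trace (the socket path, the 10.0.0.2:4420 target, the -i 32 interval and the "> 0" check are simply the values used in this run, and treating rpc_cmd as addressing the nvmf target application rather than bdevperf is an assumption, since its socket is not expanded in this excerpt):

    # bperf_rpc / bperf_py talk to the bdevperf instance over /var/tmp/bperf.sock
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1     # keep per-command NVMe error counters, retry indefinitely
    rpc_cmd accel_error_inject_error -o crc32c -t disable                       # clear any stale crc32c error injection (assumed target-side)
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # attach with data digest enabled
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32                 # periodically corrupt crc32c results (-i 32)
    bperf_py perform_tests                                                      # run the 2-second workload started above
    get_transient_errcount nvme0n1                                              # bdev_get_iostat -b nvme0n1 | jq ...command_transient_transport_error, expected > 0

The digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions that follow are therefore the intended outcome of this pass, and the count read back through bdev_get_iostat is what decides pass/fail, as in the "(( 112 > 0 ))" check of the previous pass.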
00:32:00.286 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:00.286 Zero copy mechanism will not be used. 00:32:00.286 Running I/O for 2 seconds... 00:32:00.286 [2024-07-22 16:09:03.024605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.286 [2024-07-22 16:09:03.024676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.286 [2024-07-22 16:09:03.024692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.286 [2024-07-22 16:09:03.029350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.286 [2024-07-22 16:09:03.029412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.286 [2024-07-22 16:09:03.029428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.286 [2024-07-22 16:09:03.033920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.286 [2024-07-22 16:09:03.033967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.286 [2024-07-22 16:09:03.033982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.286 [2024-07-22 16:09:03.038308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.286 [2024-07-22 16:09:03.038348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.286 [2024-07-22 16:09:03.038362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.286 [2024-07-22 16:09:03.043047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.286 [2024-07-22 16:09:03.043094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.286 [2024-07-22 16:09:03.043109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.286 [2024-07-22 16:09:03.047822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.286 [2024-07-22 16:09:03.047866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.286 [2024-07-22 16:09:03.047881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.286 [2024-07-22 16:09:03.052235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.286 [2024-07-22 16:09:03.052274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.286 [2024-07-22 16:09:03.052288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.286 [2024-07-22 16:09:03.056771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.286 [2024-07-22 16:09:03.056812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.286 [2024-07-22 16:09:03.056826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.286 [2024-07-22 16:09:03.061232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.286 [2024-07-22 16:09:03.061274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.286 [2024-07-22 16:09:03.061289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.286 [2024-07-22 16:09:03.065667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.287 [2024-07-22 16:09:03.065705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.287 [2024-07-22 16:09:03.065718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.287 [2024-07-22 16:09:03.070139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.287 [2024-07-22 16:09:03.070179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.287 [2024-07-22 16:09:03.070193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.287 [2024-07-22 16:09:03.074687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.287 [2024-07-22 16:09:03.074728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.287 [2024-07-22 16:09:03.074742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.287 [2024-07-22 16:09:03.079168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.287 [2024-07-22 16:09:03.079209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.287 [2024-07-22 16:09:03.079223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.287 [2024-07-22 16:09:03.083720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.287 [2024-07-22 16:09:03.083760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.287 [2024-07-22 16:09:03.083774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.287 [2024-07-22 16:09:03.088794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.287 [2024-07-22 16:09:03.088857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.287 [2024-07-22 16:09:03.088879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.287 [2024-07-22 16:09:03.093838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.287 [2024-07-22 16:09:03.093885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.287 [2024-07-22 16:09:03.093900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.287 [2024-07-22 16:09:03.098525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.287 [2024-07-22 16:09:03.098565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.287 [2024-07-22 16:09:03.098579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.287 [2024-07-22 16:09:03.103566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.287 [2024-07-22 16:09:03.103629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.287 [2024-07-22 16:09:03.103652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.287 [2024-07-22 16:09:03.109655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.287 [2024-07-22 16:09:03.109716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.287 [2024-07-22 16:09:03.109737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.287 [2024-07-22 16:09:03.115545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.287 [2024-07-22 16:09:03.115609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.287 [2024-07-22 16:09:03.115633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.287 [2024-07-22 16:09:03.121574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 
00:32:00.287 [2024-07-22 16:09:03.121637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.287 [2024-07-22 16:09:03.121660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.287 [2024-07-22 16:09:03.127506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.287 [2024-07-22 16:09:03.127569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.287 [2024-07-22 16:09:03.127593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.287 [2024-07-22 16:09:03.132240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.287 [2024-07-22 16:09:03.132286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.287 [2024-07-22 16:09:03.132301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.287 [2024-07-22 16:09:03.136775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.287 [2024-07-22 16:09:03.136828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.287 [2024-07-22 16:09:03.136847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.287 [2024-07-22 16:09:03.142641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.287 [2024-07-22 16:09:03.142728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.287 [2024-07-22 16:09:03.142754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.287 [2024-07-22 16:09:03.148656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.287 [2024-07-22 16:09:03.148729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.287 [2024-07-22 16:09:03.148752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.546 [2024-07-22 16:09:03.154771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.546 [2024-07-22 16:09:03.154841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-07-22 16:09:03.154867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.546 [2024-07-22 16:09:03.160800] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.546 [2024-07-22 16:09:03.160874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-07-22 16:09:03.160898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.546 [2024-07-22 16:09:03.165830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.546 [2024-07-22 16:09:03.165894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-07-22 16:09:03.165910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.546 [2024-07-22 16:09:03.170458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.546 [2024-07-22 16:09:03.170536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-07-22 16:09:03.170551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.546 [2024-07-22 16:09:03.175059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.546 [2024-07-22 16:09:03.175106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-07-22 16:09:03.175121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.546 [2024-07-22 16:09:03.179469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.546 [2024-07-22 16:09:03.179522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-07-22 16:09:03.179536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.546 [2024-07-22 16:09:03.183925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.546 [2024-07-22 16:09:03.183962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-07-22 16:09:03.183975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.546 [2024-07-22 16:09:03.188272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.546 [2024-07-22 16:09:03.188308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-07-22 16:09:03.188321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:32:00.546 [2024-07-22 16:09:03.192663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.546 [2024-07-22 16:09:03.192697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-07-22 16:09:03.192711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.546 [2024-07-22 16:09:03.197090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.546 [2024-07-22 16:09:03.197127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-07-22 16:09:03.197141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.546 [2024-07-22 16:09:03.201516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.546 [2024-07-22 16:09:03.201551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-07-22 16:09:03.201564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.546 [2024-07-22 16:09:03.205987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.546 [2024-07-22 16:09:03.206025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-07-22 16:09:03.206038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.546 [2024-07-22 16:09:03.210443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.546 [2024-07-22 16:09:03.210479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.546 [2024-07-22 16:09:03.210505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.546 [2024-07-22 16:09:03.214924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.214960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.214974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.219391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.219426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.219440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.223853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.223889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.223902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.228479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.228533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.228548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.233048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.233105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.233121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.237726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.237793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.237810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.242413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.242473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.242498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.246971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.247013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.247027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.251460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.251514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.251529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.255920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.255961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.255975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.260400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.260440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.260454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.264845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.264887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.264901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.269280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.269320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.269334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.273799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.273838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.273852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.278255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.278294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.278308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.282718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.282757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:00.547 [2024-07-22 16:09:03.282771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.287120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.287157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.287172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.291539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.291574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.291588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.296010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.296046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.296060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.300498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.300533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.300546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.304938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.304975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.304989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.309402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.309439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.309453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.313876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.313914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.313928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.318382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.318419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.318432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.322895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.322945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.322960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.327404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.547 [2024-07-22 16:09:03.327439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.547 [2024-07-22 16:09:03.327453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.547 [2024-07-22 16:09:03.331791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.548 [2024-07-22 16:09:03.331829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.548 [2024-07-22 16:09:03.331842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.548 [2024-07-22 16:09:03.336293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.548 [2024-07-22 16:09:03.336332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.548 [2024-07-22 16:09:03.336346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.548 [2024-07-22 16:09:03.340819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.548 [2024-07-22 16:09:03.340857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.548 [2024-07-22 16:09:03.340871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.548 [2024-07-22 16:09:03.345173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.548 [2024-07-22 16:09:03.345212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.548 [2024-07-22 16:09:03.345225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.548 [2024-07-22 16:09:03.349516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.548 [2024-07-22 16:09:03.349554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.548 [2024-07-22 16:09:03.349567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.548 [2024-07-22 16:09:03.353880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.548 [2024-07-22 16:09:03.353919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.548 [2024-07-22 16:09:03.353933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.548 [2024-07-22 16:09:03.358377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.548 [2024-07-22 16:09:03.358419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.548 [2024-07-22 16:09:03.358433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.548 [2024-07-22 16:09:03.362928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.548 [2024-07-22 16:09:03.362969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.548 [2024-07-22 16:09:03.362983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.548 [2024-07-22 16:09:03.367520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.548 [2024-07-22 16:09:03.367576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.548 [2024-07-22 16:09:03.367591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.548 [2024-07-22 16:09:03.372159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.548 [2024-07-22 16:09:03.372219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.548 [2024-07-22 16:09:03.372234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.548 [2024-07-22 16:09:03.376731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 
00:32:00.548 [2024-07-22 16:09:03.376774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.548 [2024-07-22 16:09:03.376788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.548 [2024-07-22 16:09:03.381232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.548 [2024-07-22 16:09:03.381273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.548 [2024-07-22 16:09:03.381286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.548 [2024-07-22 16:09:03.385676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.548 [2024-07-22 16:09:03.385715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.548 [2024-07-22 16:09:03.385728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.548 [2024-07-22 16:09:03.390154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.548 [2024-07-22 16:09:03.390193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.548 [2024-07-22 16:09:03.390207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.548 [2024-07-22 16:09:03.394576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.548 [2024-07-22 16:09:03.394612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.548 [2024-07-22 16:09:03.394626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.548 [2024-07-22 16:09:03.399058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.548 [2024-07-22 16:09:03.399094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.548 [2024-07-22 16:09:03.399107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.548 [2024-07-22 16:09:03.403520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.548 [2024-07-22 16:09:03.403557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.548 [2024-07-22 16:09:03.403570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.548 [2024-07-22 16:09:03.407885] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.548 [2024-07-22 16:09:03.407922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.548 [2024-07-22 16:09:03.407935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.412322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.808 [2024-07-22 16:09:03.412359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-07-22 16:09:03.412373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.416810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.808 [2024-07-22 16:09:03.416858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-07-22 16:09:03.416872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.421300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.808 [2024-07-22 16:09:03.421366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-07-22 16:09:03.421381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.426029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.808 [2024-07-22 16:09:03.426094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-07-22 16:09:03.426109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.430602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.808 [2024-07-22 16:09:03.430680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-07-22 16:09:03.430695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.435201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.808 [2024-07-22 16:09:03.435241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-07-22 16:09:03.435255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.439758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.808 [2024-07-22 16:09:03.439798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-07-22 16:09:03.439812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.444143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.808 [2024-07-22 16:09:03.444180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-07-22 16:09:03.444195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.448555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.808 [2024-07-22 16:09:03.448590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-07-22 16:09:03.448604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.453059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.808 [2024-07-22 16:09:03.453100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-07-22 16:09:03.453114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.457528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.808 [2024-07-22 16:09:03.457565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-07-22 16:09:03.457578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.461993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.808 [2024-07-22 16:09:03.462029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-07-22 16:09:03.462043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.466365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.808 [2024-07-22 16:09:03.466400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-07-22 16:09:03.466414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.470807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.808 [2024-07-22 16:09:03.470842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-07-22 16:09:03.470856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.475250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.808 [2024-07-22 16:09:03.475286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-07-22 16:09:03.475299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.479608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.808 [2024-07-22 16:09:03.479642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-07-22 16:09:03.479655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.483978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.808 [2024-07-22 16:09:03.484014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-07-22 16:09:03.484027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.488400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.808 [2024-07-22 16:09:03.488445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-07-22 16:09:03.488458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.492908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.808 [2024-07-22 16:09:03.492943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-07-22 16:09:03.492957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.497305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.808 [2024-07-22 16:09:03.497339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.808 [2024-07-22 16:09:03.497353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.808 [2024-07-22 16:09:03.501717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.501751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.501764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.506246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.506281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.506294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.510747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.510781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.510794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.515219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.515254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.515267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.519708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.519742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.519756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.524330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.524393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.524407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.528884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.528941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:00.809 [2024-07-22 16:09:03.528955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.533399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.533454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.533469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.537938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.537989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.538003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.542450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.542498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.542513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.546770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.546804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.546817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.551200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.551235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.551249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.555605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.555640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.555654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.560116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.560152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.560165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.564554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.564587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.564600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.568949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.568984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.568997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.573388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.573423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.573436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.577829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.577864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.577877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.582247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.582281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.582295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.586594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.586629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.586641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.591044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.591081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.591094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.595578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.595627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.595640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.600048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.600084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.600096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.604304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.604339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.604351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.608730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.608763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.608776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.613155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.613190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.613202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.617537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.809 [2024-07-22 16:09:03.617570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.809 [2024-07-22 16:09:03.617583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.809 [2024-07-22 16:09:03.621987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 
00:32:00.810 [2024-07-22 16:09:03.622023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.810 [2024-07-22 16:09:03.622035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.810 [2024-07-22 16:09:03.626478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.810 [2024-07-22 16:09:03.626523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.810 [2024-07-22 16:09:03.626536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.810 [2024-07-22 16:09:03.630898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.810 [2024-07-22 16:09:03.630941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.810 [2024-07-22 16:09:03.630954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.810 [2024-07-22 16:09:03.635344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.810 [2024-07-22 16:09:03.635391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.810 [2024-07-22 16:09:03.635404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.810 [2024-07-22 16:09:03.639804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.810 [2024-07-22 16:09:03.639840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.810 [2024-07-22 16:09:03.639854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.810 [2024-07-22 16:09:03.644323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.810 [2024-07-22 16:09:03.644358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.810 [2024-07-22 16:09:03.644372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.810 [2024-07-22 16:09:03.648760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.810 [2024-07-22 16:09:03.648794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.810 [2024-07-22 16:09:03.648807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.810 [2024-07-22 16:09:03.653210] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.810 [2024-07-22 16:09:03.653246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.810 [2024-07-22 16:09:03.653261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:00.810 [2024-07-22 16:09:03.657649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.810 [2024-07-22 16:09:03.657683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.810 [2024-07-22 16:09:03.657697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:00.810 [2024-07-22 16:09:03.662056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.810 [2024-07-22 16:09:03.662092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.810 [2024-07-22 16:09:03.662105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:00.810 [2024-07-22 16:09:03.666480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:00.810 [2024-07-22 16:09:03.666526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.810 [2024-07-22 16:09:03.666539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.071 [2024-07-22 16:09:03.670923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.071 [2024-07-22 16:09:03.670957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.071 [2024-07-22 16:09:03.670970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.071 [2024-07-22 16:09:03.675359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.071 [2024-07-22 16:09:03.675394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.071 [2024-07-22 16:09:03.675407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.071 [2024-07-22 16:09:03.679795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.071 [2024-07-22 16:09:03.679840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.071 [2024-07-22 16:09:03.679854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:32:01.071 [2024-07-22 16:09:03.684291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.071 [2024-07-22 16:09:03.684327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.071 [2024-07-22 16:09:03.684340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.071 [2024-07-22 16:09:03.688733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.071 [2024-07-22 16:09:03.688767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.071 [2024-07-22 16:09:03.688780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.071 [2024-07-22 16:09:03.693062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.071 [2024-07-22 16:09:03.693097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.071 [2024-07-22 16:09:03.693109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.071 [2024-07-22 16:09:03.697410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.071 [2024-07-22 16:09:03.697455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.071 [2024-07-22 16:09:03.697468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.071 [2024-07-22 16:09:03.701849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.071 [2024-07-22 16:09:03.701883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.071 [2024-07-22 16:09:03.701896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.071 [2024-07-22 16:09:03.706307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.071 [2024-07-22 16:09:03.706343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.071 [2024-07-22 16:09:03.706356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.071 [2024-07-22 16:09:03.710704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.071 [2024-07-22 16:09:03.710739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.071 [2024-07-22 16:09:03.710752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.071 [2024-07-22 16:09:03.715162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.071 [2024-07-22 16:09:03.715196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.071 [2024-07-22 16:09:03.715209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.071 [2024-07-22 16:09:03.719554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.071 [2024-07-22 16:09:03.719588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.071 [2024-07-22 16:09:03.719601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.071 [2024-07-22 16:09:03.724003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.071 [2024-07-22 16:09:03.724038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.071 [2024-07-22 16:09:03.724051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.071 [2024-07-22 16:09:03.728424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.071 [2024-07-22 16:09:03.728459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.071 [2024-07-22 16:09:03.728472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.071 [2024-07-22 16:09:03.732872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.071 [2024-07-22 16:09:03.732906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.071 [2024-07-22 16:09:03.732919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.071 [2024-07-22 16:09:03.737272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.071 [2024-07-22 16:09:03.737306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.071 [2024-07-22 16:09:03.737320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.071 [2024-07-22 16:09:03.741574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.071 [2024-07-22 16:09:03.741607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.071 [2024-07-22 16:09:03.741620] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.071 [2024-07-22 16:09:03.746075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.071 [2024-07-22 16:09:03.746111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.071 [2024-07-22 16:09:03.746124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.071 [2024-07-22 16:09:03.750601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.071 [2024-07-22 16:09:03.750635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.071 [2024-07-22 16:09:03.750647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.071 [2024-07-22 16:09:03.755069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.071 [2024-07-22 16:09:03.755103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.071 [2024-07-22 16:09:03.755115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.759566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.759600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.759613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.764005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.764040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.764053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.768403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.768438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.768462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.772925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.772979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.772993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.777459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.777521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.777536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.781988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.782024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.782037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.786498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.786531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.786543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.790931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.790966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.790979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.795436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.795471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.795499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.799867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.799903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.799916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.804414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.804450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.804463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.808892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.808928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.808940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.813272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.813308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.813321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.817777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.817812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.817825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.822254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.822288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.822301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.826896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.826948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.826962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.831549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.831590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.831603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.836144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.836184] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.836198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.840612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.840661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.840675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.845044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.845093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.845107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.849547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.849591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.849604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.853997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.854050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.854064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.072 [2024-07-22 16:09:03.858603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.072 [2024-07-22 16:09:03.858642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.072 [2024-07-22 16:09:03.858670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.073 [2024-07-22 16:09:03.863045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.073 [2024-07-22 16:09:03.863082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.073 [2024-07-22 16:09:03.863095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.073 [2024-07-22 16:09:03.867463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c0ab70) 00:32:01.073 [2024-07-22 16:09:03.867522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.073 [2024-07-22 16:09:03.867537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.073 [2024-07-22 16:09:03.871890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.073 [2024-07-22 16:09:03.871938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.073 [2024-07-22 16:09:03.871952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.073 [2024-07-22 16:09:03.876365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.073 [2024-07-22 16:09:03.876417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.073 [2024-07-22 16:09:03.876431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.073 [2024-07-22 16:09:03.880967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.073 [2024-07-22 16:09:03.881027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.073 [2024-07-22 16:09:03.881040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.073 [2024-07-22 16:09:03.885583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.073 [2024-07-22 16:09:03.885628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.073 [2024-07-22 16:09:03.885643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.073 [2024-07-22 16:09:03.890240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.073 [2024-07-22 16:09:03.890308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.073 [2024-07-22 16:09:03.890323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.073 [2024-07-22 16:09:03.895132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.073 [2024-07-22 16:09:03.895198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.073 [2024-07-22 16:09:03.895213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.073 [2024-07-22 16:09:03.899800] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.073 [2024-07-22 16:09:03.899852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.073 [2024-07-22 16:09:03.899866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.073 [2024-07-22 16:09:03.904311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.073 [2024-07-22 16:09:03.904364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.073 [2024-07-22 16:09:03.904377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.073 [2024-07-22 16:09:03.908919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.073 [2024-07-22 16:09:03.908975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.073 [2024-07-22 16:09:03.908989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.073 [2024-07-22 16:09:03.913543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.073 [2024-07-22 16:09:03.913598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.073 [2024-07-22 16:09:03.913613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.073 [2024-07-22 16:09:03.918028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.073 [2024-07-22 16:09:03.918078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.073 [2024-07-22 16:09:03.918092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.073 [2024-07-22 16:09:03.922572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.073 [2024-07-22 16:09:03.922618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.073 [2024-07-22 16:09:03.922631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.073 [2024-07-22 16:09:03.927071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.073 [2024-07-22 16:09:03.927119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.073 [2024-07-22 16:09:03.927134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:32:01.073 [2024-07-22 16:09:03.931754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.073 [2024-07-22 16:09:03.931808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.073 [2024-07-22 16:09:03.931823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:03.936220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:03.936279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:03.936294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:03.940810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:03.940867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:03.940881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:03.945350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:03.945396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:03.945409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:03.949870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:03.949928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:03.949942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:03.954234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:03.954274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:03.954288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:03.958694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:03.958744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:03.958758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:03.963190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:03.963228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:03.963242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:03.967583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:03.967620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:03.967633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:03.972157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:03.972229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:03.972245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:03.976845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:03.976896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:03.976911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:03.981214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:03.981255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:03.981269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:03.985703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:03.985755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:03.985768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:03.990266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:03.990324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:03.990338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:03.994753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:03.994795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:03.994809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:03.999272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:03.999311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:03.999325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:04.003672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:04.003712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:04.003725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:04.007992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:04.008045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:04.008059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:04.012437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:04.012498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:04.012514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:04.016796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:04.016836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:04.016850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:04.021107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:04.021146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:01.334 [2024-07-22 16:09:04.021159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:04.025458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:04.025517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:04.025532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:04.029864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:04.029902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:04.029916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:04.034063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:04.034101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:04.034115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:04.038447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:04.038499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:04.038514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:04.042799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:04.042854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:04.042868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.334 [2024-07-22 16:09:04.047216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.334 [2024-07-22 16:09:04.047260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.334 [2024-07-22 16:09:04.047274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.051471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.335 [2024-07-22 16:09:04.051533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.051547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.055852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.335 [2024-07-22 16:09:04.055913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.055928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.060435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.335 [2024-07-22 16:09:04.060511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.060526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.064813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.335 [2024-07-22 16:09:04.064880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.064894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.069551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.335 [2024-07-22 16:09:04.069602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.069616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.074141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.335 [2024-07-22 16:09:04.074198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.074228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.078760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.335 [2024-07-22 16:09:04.078804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.078818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.083241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.335 [2024-07-22 16:09:04.083286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.083300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.087775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.335 [2024-07-22 16:09:04.087828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.087842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.092312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.335 [2024-07-22 16:09:04.092364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.092378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.096762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.335 [2024-07-22 16:09:04.096807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.096822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.101262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.335 [2024-07-22 16:09:04.101312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.101326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.105615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.335 [2024-07-22 16:09:04.105652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.105666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.110032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.335 [2024-07-22 16:09:04.110096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.110110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.114509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 
00:32:01.335 [2024-07-22 16:09:04.114562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.114582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.119027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.335 [2024-07-22 16:09:04.119076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.119090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.123489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.335 [2024-07-22 16:09:04.123543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.123560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.127883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.335 [2024-07-22 16:09:04.127924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.127938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.132373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.335 [2024-07-22 16:09:04.132416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.132430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.136909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.335 [2024-07-22 16:09:04.136962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.136976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.141334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:01.335 [2024-07-22 16:09:04.141389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:01.335 [2024-07-22 16:09:04.141403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:01.335 [2024-07-22 16:09:04.145655] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70)
00:32:01.335 [2024-07-22 16:09:04.145722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.335 [2024-07-22 16:09:04.145735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:01.335 [2024-07-22 16:09:04.150053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70)
00:32:01.335 [2024-07-22 16:09:04.150105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.335 [2024-07-22 16:09:04.150119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:01.335 [2024-07-22 16:09:04.154511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70)
00:32:01.335 [2024-07-22 16:09:04.154573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:01.335 [2024-07-22 16:09:04.154586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... repeated log output condensed: the same three-line sequence - a data digest error on tqpair=(0x1c0ab70) reported by nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done, the offending READ sqid:1 cid:15 nsid:1 command printed by nvme_qpair.c: 243:nvme_io_qpair_print_command, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion printed by nvme_qpair.c: 474:spdk_nvme_print_completion - recurs roughly every 4-5 ms with varying lba values from 16:09:04.158 through 16:09:04.747 (console timestamps 00:32:01.335 to 00:32:02.137) ...]
00:32:02.137 [2024-07-22 16:09:04.752063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70)
00:32:02.137 [2024-07-22 16:09:04.752098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.137 [2024-07-22 16:09:04.752110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:02.137 [2024-07-22 16:09:04.756330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70)
00:32:02.137 [2024-07-22 16:09:04.756365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:02.137 [2024-07-22 16:09:04.756377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.137 [2024-07-22 16:09:04.760745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.137 [2024-07-22 16:09:04.760779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.137 [2024-07-22 16:09:04.760793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.137 [2024-07-22 16:09:04.765066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.137 [2024-07-22 16:09:04.765100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.137 [2024-07-22 16:09:04.765113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.137 [2024-07-22 16:09:04.769511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.137 [2024-07-22 16:09:04.769544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.137 [2024-07-22 16:09:04.769557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.137 [2024-07-22 16:09:04.773879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.137 [2024-07-22 16:09:04.773914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.137 [2024-07-22 16:09:04.773927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.137 [2024-07-22 16:09:04.778215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.137 [2024-07-22 16:09:04.778251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.137 [2024-07-22 16:09:04.778264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.137 [2024-07-22 16:09:04.782682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.137 [2024-07-22 16:09:04.782742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.137 [2024-07-22 16:09:04.782756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.137 [2024-07-22 16:09:04.787145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.137 [2024-07-22 16:09:04.787203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.137 [2024-07-22 16:09:04.787218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.137 [2024-07-22 16:09:04.791745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.137 [2024-07-22 16:09:04.791801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.137 [2024-07-22 16:09:04.791815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.137 [2024-07-22 16:09:04.796181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.137 [2024-07-22 16:09:04.796238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.137 [2024-07-22 16:09:04.796253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.137 [2024-07-22 16:09:04.800633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.137 [2024-07-22 16:09:04.800678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.137 [2024-07-22 16:09:04.800691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.137 [2024-07-22 16:09:04.805014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.137 [2024-07-22 16:09:04.805050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.137 [2024-07-22 16:09:04.805063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.137 [2024-07-22 16:09:04.809350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.137 [2024-07-22 16:09:04.809385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.137 [2024-07-22 16:09:04.809398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.137 [2024-07-22 16:09:04.813649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.137 [2024-07-22 16:09:04.813684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.137 [2024-07-22 16:09:04.813697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.137 [2024-07-22 16:09:04.817947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.137 [2024-07-22 16:09:04.817982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.817995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.822318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.822353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.822366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.826692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.826727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.826750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.831028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.831062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.831075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.835396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.835431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.835444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.839756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.839791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.839805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.844118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.844152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.844166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.848411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 
00:32:02.138 [2024-07-22 16:09:04.848445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.848458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.852790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.852824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.852838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.857101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.857137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.857150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.861377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.861413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.861426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.865693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.865727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.865741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.870056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.870100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.870113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.874517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.874572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.874586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.878955] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.878997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.879011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.883304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.883343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.883357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.887733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.887775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.887788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.892102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.892139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.892153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.896422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.896459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.896472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.900869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.900921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.900934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.905668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.905753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.905780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.910254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.910314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.910329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.914644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.914690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.914704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.919176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.919221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.919236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.923659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.923701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.923716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.928013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.928052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.928065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.932394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.138 [2024-07-22 16:09:04.932432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.138 [2024-07-22 16:09:04.932446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.138 [2024-07-22 16:09:04.936794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.139 [2024-07-22 16:09:04.936830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.139 [2024-07-22 16:09:04.936844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.139 [2024-07-22 16:09:04.941254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.139 [2024-07-22 16:09:04.941309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.139 [2024-07-22 16:09:04.941324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.139 [2024-07-22 16:09:04.945548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.139 [2024-07-22 16:09:04.945597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.139 [2024-07-22 16:09:04.945611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.139 [2024-07-22 16:09:04.949964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.139 [2024-07-22 16:09:04.950012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.139 [2024-07-22 16:09:04.950026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.139 [2024-07-22 16:09:04.954336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.139 [2024-07-22 16:09:04.954386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.139 [2024-07-22 16:09:04.954401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.139 [2024-07-22 16:09:04.958722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.139 [2024-07-22 16:09:04.958772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.139 [2024-07-22 16:09:04.958786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.139 [2024-07-22 16:09:04.963278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.139 [2024-07-22 16:09:04.963338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.139 [2024-07-22 16:09:04.963353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.139 [2024-07-22 16:09:04.967784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.139 [2024-07-22 16:09:04.967844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.139 [2024-07-22 16:09:04.967858] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.139 [2024-07-22 16:09:04.972241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.139 [2024-07-22 16:09:04.972304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.139 [2024-07-22 16:09:04.972318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.139 [2024-07-22 16:09:04.976751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.139 [2024-07-22 16:09:04.976802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.139 [2024-07-22 16:09:04.976816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.139 [2024-07-22 16:09:04.981131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.139 [2024-07-22 16:09:04.981177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.139 [2024-07-22 16:09:04.981191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.139 [2024-07-22 16:09:04.985537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.139 [2024-07-22 16:09:04.985573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.139 [2024-07-22 16:09:04.985586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.139 [2024-07-22 16:09:04.989961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.139 [2024-07-22 16:09:04.990010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.139 [2024-07-22 16:09:04.990024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.139 [2024-07-22 16:09:04.994416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.139 [2024-07-22 16:09:04.994457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.139 [2024-07-22 16:09:04.994516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.398 [2024-07-22 16:09:04.999060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.398 [2024-07-22 16:09:04.999119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:02.398 [2024-07-22 16:09:04.999134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.398 [2024-07-22 16:09:05.003501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.398 [2024-07-22 16:09:05.003540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.398 [2024-07-22 16:09:05.003554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.398 [2024-07-22 16:09:05.008075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.398 [2024-07-22 16:09:05.008113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.398 [2024-07-22 16:09:05.008127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:02.398 [2024-07-22 16:09:05.012590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.398 [2024-07-22 16:09:05.012649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.398 [2024-07-22 16:09:05.012665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:02.398 [2024-07-22 16:09:05.017121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.398 [2024-07-22 16:09:05.017188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.398 [2024-07-22 16:09:05.017203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:02.398 [2024-07-22 16:09:05.021637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c0ab70) 00:32:02.398 [2024-07-22 16:09:05.021690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.398 [2024-07-22 16:09:05.021704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:02.398 00:32:02.398 Latency(us) 00:32:02.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:02.398 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:02.398 nvme0n1 : 2.00 6889.18 861.15 0.00 0.00 2318.51 1995.87 6136.55 00:32:02.398 =================================================================================================================== 00:32:02.398 Total : 6889.18 861.15 0.00 0.00 2318.51 1995.87 6136.55 00:32:02.398 0 00:32:02.398 16:09:05 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:02.398 16:09:05 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:02.398 | .driver_specific 00:32:02.398 | .nvme_error 00:32:02.398 | .status_code 00:32:02.398 | .command_transient_transport_error' 
00:32:02.398 16:09:05 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:02.398 16:09:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:02.656 16:09:05 -- host/digest.sh@71 -- # (( 445 > 0 )) 00:32:02.656 16:09:05 -- host/digest.sh@73 -- # killprocess 71708 00:32:02.656 16:09:05 -- common/autotest_common.sh@926 -- # '[' -z 71708 ']' 00:32:02.656 16:09:05 -- common/autotest_common.sh@930 -- # kill -0 71708 00:32:02.656 16:09:05 -- common/autotest_common.sh@931 -- # uname 00:32:02.656 16:09:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:02.656 16:09:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71708 00:32:02.656 16:09:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:02.656 killing process with pid 71708 00:32:02.656 16:09:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:02.656 16:09:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71708' 00:32:02.656 Received shutdown signal, test time was about 2.000000 seconds 00:32:02.656 00:32:02.656 Latency(us) 00:32:02.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:02.656 =================================================================================================================== 00:32:02.656 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:02.656 16:09:05 -- common/autotest_common.sh@945 -- # kill 71708 00:32:02.656 16:09:05 -- common/autotest_common.sh@950 -- # wait 71708 00:32:02.915 16:09:05 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:32:02.915 16:09:05 -- host/digest.sh@54 -- # local rw bs qd 00:32:02.915 16:09:05 -- host/digest.sh@56 -- # rw=randwrite 00:32:02.915 16:09:05 -- host/digest.sh@56 -- # bs=4096 00:32:02.915 16:09:05 -- host/digest.sh@56 -- # qd=128 00:32:02.915 16:09:05 -- host/digest.sh@58 -- # bperfpid=71768 00:32:02.915 16:09:05 -- host/digest.sh@60 -- # waitforlisten 71768 /var/tmp/bperf.sock 00:32:02.915 16:09:05 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:32:02.915 16:09:05 -- common/autotest_common.sh@819 -- # '[' -z 71768 ']' 00:32:02.915 16:09:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:02.915 16:09:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:02.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:02.915 16:09:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:02.915 16:09:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:02.915 16:09:05 -- common/autotest_common.sh@10 -- # set +x 00:32:02.915 [2024-07-22 16:09:05.604352] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:32:02.915 [2024-07-22 16:09:05.604479] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71768 ] 00:32:02.915 [2024-07-22 16:09:05.746149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.173 [2024-07-22 16:09:05.805790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:03.740 16:09:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:03.740 16:09:06 -- common/autotest_common.sh@852 -- # return 0 00:32:03.740 16:09:06 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:03.740 16:09:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:03.998 16:09:06 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:03.998 16:09:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:03.998 16:09:06 -- common/autotest_common.sh@10 -- # set +x 00:32:03.998 16:09:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.998 16:09:06 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:03.998 16:09:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:04.256 nvme0n1 00:32:04.515 16:09:07 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:04.515 16:09:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:04.515 16:09:07 -- common/autotest_common.sh@10 -- # set +x 00:32:04.515 16:09:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:04.515 16:09:07 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:04.515 16:09:07 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:04.515 Running I/O for 2 seconds... 
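Editor's note: the xtrace above (host/digest.sh, run_bperf_err randwrite 4096 128) boils down to a short sequence: configure bdev_nvme to keep per-command error statistics and retry indefinitely, attach the TCP target with data digest enabled, inject CRC32C corruption in the accel layer, drive I/O through bdevperf for two seconds, and then count COMMAND TRANSIENT TRANSPORT ERROR completions from the I/O statistics. The sketch below condenses that sequence into one standalone script; the RPC sub-commands, socket path, target address, and jq filter are copied from the trace, while the rpc_cmd/bperf_rpc wrappers and the standalone framing are simplified stand-ins for the real helpers in autotest_common.sh and digest.sh, not the actual implementation.

#!/usr/bin/env bash
# Condensed sketch of the data-digest error pass traced above (randwrite, 4K, QD 128).
# Paths and RPC arguments come from the xtrace; the helper wrappers are assumptions.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
sock=/var/tmp/bperf.sock

# bperf_rpc targets the bdevperf app; rpc_cmd in the trace targets the nvmf target
# app on its default RPC socket (modelled here as a plain rpc.py call -- an assumption).
bperf_rpc() { "$rpc" -s "$sock" "$@"; }
rpc_cmd()   { "$rpc" "$@"; }

# 1) Keep NVMe error statistics and retry failed I/O forever inside bdev_nvme.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# 2) Attach the target with data digest enabled (--ddgst) so payloads are CRC32C-checked.
rpc_cmd accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 3) Corrupt 256 CRC32C operations in the accel layer, then run the 2-second workload.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
"$bperf_py" -s "$sock" perform_tests

# 4) Each corrupted digest should surface as a transient transport error (00/22);
#    the pass/fail check is simply that more than zero were counted.
errs=$(bperf_rpc bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error
                 | .status_code | .command_transient_transport_error')
(( errs > 0 ))

The randread pass whose summary table appears above presumably follows the same pattern with -w randread -o 131072 -q 16 (matching the "depth: 16, IO size: 131072" job line), with the digest errors reported against READ completions instead of WRITEs.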
00:32:04.515 [2024-07-22 16:09:07.254644] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190ddc00 00:32:04.515 [2024-07-22 16:09:07.256117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.515 [2024-07-22 16:09:07.256180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:04.515 [2024-07-22 16:09:07.271567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190fef90 00:32:04.515 [2024-07-22 16:09:07.272950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.515 [2024-07-22 16:09:07.272994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.515 [2024-07-22 16:09:07.288424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190ff3c8 00:32:04.515 [2024-07-22 16:09:07.289805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.515 [2024-07-22 16:09:07.289846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:04.515 [2024-07-22 16:09:07.305234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190feb58 00:32:04.515 [2024-07-22 16:09:07.306623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.515 [2024-07-22 16:09:07.306661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:04.515 [2024-07-22 16:09:07.322058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190fe720 00:32:04.515 [2024-07-22 16:09:07.323469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.515 [2024-07-22 16:09:07.323524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:04.515 [2024-07-22 16:09:07.338874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190fe2e8 00:32:04.515 [2024-07-22 16:09:07.340227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.515 [2024-07-22 16:09:07.340265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:04.515 [2024-07-22 16:09:07.355659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190fdeb0 00:32:04.515 [2024-07-22 16:09:07.356989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.515 [2024-07-22 16:09:07.357028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:32:04.515 [2024-07-22 16:09:07.372413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190fda78 00:32:04.515 [2024-07-22 16:09:07.373747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.515 [2024-07-22 16:09:07.373787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:04.774 [2024-07-22 16:09:07.390230] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190fd640 00:32:04.774 [2024-07-22 16:09:07.391626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.774 [2024-07-22 16:09:07.391691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:04.774 [2024-07-22 16:09:07.408192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190fd208 00:32:04.774 [2024-07-22 16:09:07.409521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.774 [2024-07-22 16:09:07.409565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:04.774 [2024-07-22 16:09:07.425082] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190fcdd0 00:32:04.774 [2024-07-22 16:09:07.426369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.774 [2024-07-22 16:09:07.426413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:04.774 [2024-07-22 16:09:07.441907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190fc998 00:32:04.774 [2024-07-22 16:09:07.443246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.774 [2024-07-22 16:09:07.443287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:04.774 [2024-07-22 16:09:07.458814] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190fc560 00:32:04.774 [2024-07-22 16:09:07.460111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.774 [2024-07-22 16:09:07.460157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:04.774 [2024-07-22 16:09:07.475879] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190fc128 00:32:04.774 [2024-07-22 16:09:07.477178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.774 [2024-07-22 16:09:07.477231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0074 p:0 m:0 dnr:0 00:32:04.774 [2024-07-22 16:09:07.492777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190fbcf0 00:32:04.774 [2024-07-22 16:09:07.494029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.774 [2024-07-22 16:09:07.494071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:04.774 [2024-07-22 16:09:07.509626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190fb8b8 00:32:04.774 [2024-07-22 16:09:07.510901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.774 [2024-07-22 16:09:07.510957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:04.774 [2024-07-22 16:09:07.526503] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190fb480 00:32:04.774 [2024-07-22 16:09:07.527768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.774 [2024-07-22 16:09:07.527809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:04.774 [2024-07-22 16:09:07.543235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190fb048 00:32:04.774 [2024-07-22 16:09:07.544450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.774 [2024-07-22 16:09:07.544505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:04.774 [2024-07-22 16:09:07.560346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190fac10 00:32:04.774 [2024-07-22 16:09:07.561590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.774 [2024-07-22 16:09:07.561633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:04.774 [2024-07-22 16:09:07.577211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190fa7d8 00:32:04.774 [2024-07-22 16:09:07.578442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.774 [2024-07-22 16:09:07.578525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:04.774 [2024-07-22 16:09:07.595916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190fa3a0 00:32:04.774 [2024-07-22 16:09:07.597146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.774 [2024-07-22 16:09:07.597214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:04.774 [2024-07-22 16:09:07.614445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f9f68 00:32:04.774 [2024-07-22 16:09:07.615700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.774 [2024-07-22 16:09:07.615745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:04.774 [2024-07-22 16:09:07.631400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f9b30 00:32:04.774 [2024-07-22 16:09:07.632604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.774 [2024-07-22 16:09:07.632644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:05.033 [2024-07-22 16:09:07.648706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f96f8 00:32:05.033 [2024-07-22 16:09:07.649875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.033 [2024-07-22 16:09:07.649916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:05.033 [2024-07-22 16:09:07.665874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f92c0 00:32:05.033 [2024-07-22 16:09:07.667071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.033 [2024-07-22 16:09:07.667112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:05.033 [2024-07-22 16:09:07.683211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f8e88 00:32:05.033 [2024-07-22 16:09:07.684382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.033 [2024-07-22 16:09:07.684429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:05.033 [2024-07-22 16:09:07.700400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f8a50 00:32:05.034 [2024-07-22 16:09:07.701575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.034 [2024-07-22 16:09:07.701622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:05.034 [2024-07-22 16:09:07.717662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f8618 00:32:05.034 [2024-07-22 16:09:07.718820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.034 [2024-07-22 16:09:07.718859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:05.034 [2024-07-22 16:09:07.734823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f81e0 00:32:05.034 [2024-07-22 16:09:07.735968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.034 [2024-07-22 16:09:07.736008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:05.034 [2024-07-22 16:09:07.752042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f7da8 00:32:05.034 [2024-07-22 16:09:07.753163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.034 [2024-07-22 16:09:07.753205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:05.034 [2024-07-22 16:09:07.769263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f7970 00:32:05.034 [2024-07-22 16:09:07.770370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.034 [2024-07-22 16:09:07.770415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:05.034 [2024-07-22 16:09:07.786382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f7538 00:32:05.034 [2024-07-22 16:09:07.787531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.034 [2024-07-22 16:09:07.787570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:05.034 [2024-07-22 16:09:07.803284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f7100 00:32:05.034 [2024-07-22 16:09:07.804369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.034 [2024-07-22 16:09:07.804407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:05.034 [2024-07-22 16:09:07.820281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f6cc8 00:32:05.034 [2024-07-22 16:09:07.821379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.034 [2024-07-22 16:09:07.821427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:05.034 [2024-07-22 16:09:07.837520] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f6890 00:32:05.034 [2024-07-22 16:09:07.838624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.034 [2024-07-22 16:09:07.838667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:05.034 [2024-07-22 16:09:07.854845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f6458 00:32:05.034 [2024-07-22 16:09:07.855918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.034 [2024-07-22 16:09:07.855959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:05.034 [2024-07-22 16:09:07.872158] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f6020 00:32:05.034 [2024-07-22 16:09:07.873211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.034 [2024-07-22 16:09:07.873255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:05.034 [2024-07-22 16:09:07.889922] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f5be8 00:32:05.034 [2024-07-22 16:09:07.891061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.034 [2024-07-22 16:09:07.891100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:05.293 [2024-07-22 16:09:07.907777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f57b0 00:32:05.293 [2024-07-22 16:09:07.908815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.293 [2024-07-22 16:09:07.908871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:05.293 [2024-07-22 16:09:07.925454] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f5378 00:32:05.293 [2024-07-22 16:09:07.926523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.293 [2024-07-22 16:09:07.926578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:05.293 [2024-07-22 16:09:07.942811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f4f40 00:32:05.293 [2024-07-22 16:09:07.943837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.293 [2024-07-22 16:09:07.943891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:05.293 [2024-07-22 16:09:07.959830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f4b08 00:32:05.293 [2024-07-22 16:09:07.960904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.293 [2024-07-22 16:09:07.960940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:05.293 [2024-07-22 16:09:07.977092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f46d0 00:32:05.293 [2024-07-22 16:09:07.978115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.293 [2024-07-22 16:09:07.978167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:05.293 [2024-07-22 16:09:07.994724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f4298 00:32:05.293 [2024-07-22 16:09:07.995769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.293 [2024-07-22 16:09:07.995823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:05.293 [2024-07-22 16:09:08.011983] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f3e60 00:32:05.293 [2024-07-22 16:09:08.012952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.293 [2024-07-22 16:09:08.013024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:05.293 [2024-07-22 16:09:08.028987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f3a28 00:32:05.293 [2024-07-22 16:09:08.029962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.293 [2024-07-22 16:09:08.030028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:05.293 [2024-07-22 16:09:08.046319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f35f0 00:32:05.293 [2024-07-22 16:09:08.047291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.293 [2024-07-22 16:09:08.047333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:05.293 [2024-07-22 16:09:08.063804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f31b8 00:32:05.293 [2024-07-22 16:09:08.064768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.293 [2024-07-22 16:09:08.064809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:05.293 [2024-07-22 16:09:08.080979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f2d80 00:32:05.294 [2024-07-22 16:09:08.081906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.294 [2024-07-22 16:09:08.081945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:05.294 [2024-07-22 16:09:08.098345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f2948 00:32:05.294 [2024-07-22 16:09:08.099260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.294 [2024-07-22 16:09:08.099298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:05.294 [2024-07-22 16:09:08.115403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f2510 00:32:05.294 [2024-07-22 16:09:08.116268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.294 [2024-07-22 16:09:08.116303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:05.294 [2024-07-22 16:09:08.132333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f20d8 00:32:05.294 [2024-07-22 16:09:08.133200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.294 [2024-07-22 16:09:08.133238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:05.294 [2024-07-22 16:09:08.149947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f1ca0 00:32:05.294 [2024-07-22 16:09:08.150848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.294 [2024-07-22 16:09:08.150894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:05.552 [2024-07-22 16:09:08.167754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f1868 00:32:05.552 [2024-07-22 16:09:08.168652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.552 [2024-07-22 16:09:08.168694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:05.552 [2024-07-22 16:09:08.185611] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f1430 00:32:05.552 [2024-07-22 16:09:08.186509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.553 [2024-07-22 16:09:08.186558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:05.553 [2024-07-22 16:09:08.203228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f0ff8 00:32:05.553 [2024-07-22 16:09:08.204094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.553 [2024-07-22 
16:09:08.204136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:05.553 [2024-07-22 16:09:08.220434] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f0bc0 00:32:05.553 [2024-07-22 16:09:08.221280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.553 [2024-07-22 16:09:08.221323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:05.553 [2024-07-22 16:09:08.237348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f0788 00:32:05.553 [2024-07-22 16:09:08.238214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.553 [2024-07-22 16:09:08.238256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:05.553 [2024-07-22 16:09:08.255218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190f0350 00:32:05.553 [2024-07-22 16:09:08.256102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.553 [2024-07-22 16:09:08.256139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:05.553 [2024-07-22 16:09:08.272843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190eff18 00:32:05.553 [2024-07-22 16:09:08.273685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.553 [2024-07-22 16:09:08.273727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:05.553 [2024-07-22 16:09:08.290415] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190efae0 00:32:05.553 [2024-07-22 16:09:08.291259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.553 [2024-07-22 16:09:08.291301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:05.553 [2024-07-22 16:09:08.307458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190ef6a8 00:32:05.553 [2024-07-22 16:09:08.308230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.553 [2024-07-22 16:09:08.308268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:05.553 [2024-07-22 16:09:08.324554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190ef270 00:32:05.553 [2024-07-22 16:09:08.325332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:05.553 [2024-07-22 16:09:08.325371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:05.553 [2024-07-22 16:09:08.341648] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190eee38 00:32:05.553 [2024-07-22 16:09:08.342385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.553 [2024-07-22 16:09:08.342423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:05.553 [2024-07-22 16:09:08.361015] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190eea00 00:32:05.553 [2024-07-22 16:09:08.361964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.553 [2024-07-22 16:09:08.362004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:05.553 [2024-07-22 16:09:08.380827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190ee5c8 00:32:05.553 [2024-07-22 16:09:08.381784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.553 [2024-07-22 16:09:08.381822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:05.553 [2024-07-22 16:09:08.400141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190ee190 00:32:05.553 [2024-07-22 16:09:08.400981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.553 [2024-07-22 16:09:08.401017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:05.812 [2024-07-22 16:09:08.419614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190edd58 00:32:05.812 [2024-07-22 16:09:08.420481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.812 [2024-07-22 16:09:08.420525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:05.812 [2024-07-22 16:09:08.438162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190ed920 00:32:05.812 [2024-07-22 16:09:08.438859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.812 [2024-07-22 16:09:08.438897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:05.812 [2024-07-22 16:09:08.454997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190ed4e8 00:32:05.812 [2024-07-22 16:09:08.455696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10836 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:05.812 [2024-07-22 16:09:08.455735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:05.812 [2024-07-22 16:09:08.471675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190ed0b0 00:32:05.812 [2024-07-22 16:09:08.472339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.812 [2024-07-22 16:09:08.472375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:05.812 [2024-07-22 16:09:08.488582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190ecc78 00:32:05.812 [2024-07-22 16:09:08.489238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.812 [2024-07-22 16:09:08.489278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:05.812 [2024-07-22 16:09:08.505216] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190ec840 00:32:05.812 [2024-07-22 16:09:08.505868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.812 [2024-07-22 16:09:08.505904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:05.812 [2024-07-22 16:09:08.521819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190ec408 00:32:05.812 [2024-07-22 16:09:08.522450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.812 [2024-07-22 16:09:08.522502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:05.812 [2024-07-22 16:09:08.538691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190ebfd0 00:32:05.812 [2024-07-22 16:09:08.539313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.812 [2024-07-22 16:09:08.539347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:05.812 [2024-07-22 16:09:08.556112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190ebb98 00:32:05.812 [2024-07-22 16:09:08.556783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.812 [2024-07-22 16:09:08.556827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:05.812 [2024-07-22 16:09:08.573416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190eb760 00:32:05.812 [2024-07-22 16:09:08.574022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17221 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:05.812 [2024-07-22 16:09:08.574058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:05.812 [2024-07-22 16:09:08.590376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190eb328 00:32:05.812 [2024-07-22 16:09:08.591027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.812 [2024-07-22 16:09:08.591073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:05.812 [2024-07-22 16:09:08.607584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190eaef0 00:32:05.812 [2024-07-22 16:09:08.608171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.812 [2024-07-22 16:09:08.608202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:05.812 [2024-07-22 16:09:08.624599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190eaab8 00:32:05.812 [2024-07-22 16:09:08.625196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.812 [2024-07-22 16:09:08.625230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:05.812 [2024-07-22 16:09:08.641484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190ea680 00:32:05.812 [2024-07-22 16:09:08.642071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.812 [2024-07-22 16:09:08.642101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:05.812 [2024-07-22 16:09:08.658306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190ea248 00:32:05.812 [2024-07-22 16:09:08.658902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:05.812 [2024-07-22 16:09:08.658942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:06.071 [2024-07-22 16:09:08.675056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e9e10 00:32:06.071 [2024-07-22 16:09:08.675632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.071 [2024-07-22 16:09:08.675667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:06.071 [2024-07-22 16:09:08.692529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e99d8 00:32:06.071 [2024-07-22 16:09:08.693136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10429 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.071 [2024-07-22 16:09:08.693171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:06.071 [2024-07-22 16:09:08.709086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e95a0 00:32:06.071 [2024-07-22 16:09:08.709687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.071 [2024-07-22 16:09:08.709722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:06.072 [2024-07-22 16:09:08.725676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e9168 00:32:06.072 [2024-07-22 16:09:08.726242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.072 [2024-07-22 16:09:08.726276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:06.072 [2024-07-22 16:09:08.742866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e8d30 00:32:06.072 [2024-07-22 16:09:08.743462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.072 [2024-07-22 16:09:08.743506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:06.072 [2024-07-22 16:09:08.759477] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e88f8 00:32:06.072 [2024-07-22 16:09:08.759986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.072 [2024-07-22 16:09:08.760017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:06.072 [2024-07-22 16:09:08.775739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e84c0 00:32:06.072 [2024-07-22 16:09:08.776264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.072 [2024-07-22 16:09:08.776301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:06.072 [2024-07-22 16:09:08.792217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e8088 00:32:06.072 [2024-07-22 16:09:08.792758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.072 [2024-07-22 16:09:08.792790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:06.072 [2024-07-22 16:09:08.808727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e7c50 00:32:06.072 [2024-07-22 16:09:08.809284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 
lba:19785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.072 [2024-07-22 16:09:08.809318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:06.072 [2024-07-22 16:09:08.825048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e7818 00:32:06.072 [2024-07-22 16:09:08.825521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.072 [2024-07-22 16:09:08.825576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:06.072 [2024-07-22 16:09:08.841514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e73e0 00:32:06.072 [2024-07-22 16:09:08.841995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.072 [2024-07-22 16:09:08.842029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:06.072 [2024-07-22 16:09:08.858858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e6fa8 00:32:06.072 [2024-07-22 16:09:08.859317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.072 [2024-07-22 16:09:08.859350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:06.072 [2024-07-22 16:09:08.875705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e6b70 00:32:06.072 [2024-07-22 16:09:08.876112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.072 [2024-07-22 16:09:08.876143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:06.072 [2024-07-22 16:09:08.892523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e6738 00:32:06.072 [2024-07-22 16:09:08.892908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.072 [2024-07-22 16:09:08.892942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.072 [2024-07-22 16:09:08.909383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e6300 00:32:06.072 [2024-07-22 16:09:08.909825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.072 [2024-07-22 16:09:08.909859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:06.072 [2024-07-22 16:09:08.926347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e5ec8 00:32:06.072 [2024-07-22 16:09:08.926728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.072 [2024-07-22 16:09:08.926756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:06.331 [2024-07-22 16:09:08.943010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e5a90 00:32:06.331 [2024-07-22 16:09:08.943372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.331 [2024-07-22 16:09:08.943403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:06.331 [2024-07-22 16:09:08.960029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e5658 00:32:06.331 [2024-07-22 16:09:08.960376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.331 [2024-07-22 16:09:08.960405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:06.331 [2024-07-22 16:09:08.976877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e5220 00:32:06.331 [2024-07-22 16:09:08.977217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.331 [2024-07-22 16:09:08.977246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:06.331 [2024-07-22 16:09:08.993970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e4de8 00:32:06.331 [2024-07-22 16:09:08.994338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.331 [2024-07-22 16:09:08.994367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:06.331 [2024-07-22 16:09:09.010458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e49b0 00:32:06.331 [2024-07-22 16:09:09.010831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.331 [2024-07-22 16:09:09.010860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:06.331 [2024-07-22 16:09:09.027012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e4578 00:32:06.331 [2024-07-22 16:09:09.027318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.331 [2024-07-22 16:09:09.027344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:06.331 [2024-07-22 16:09:09.043600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e4140 00:32:06.331 [2024-07-22 16:09:09.043897] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.331 [2024-07-22 16:09:09.043925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:06.331 [2024-07-22 16:09:09.060434] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e3d08 00:32:06.331 [2024-07-22 16:09:09.060750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.332 [2024-07-22 16:09:09.060780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:06.332 [2024-07-22 16:09:09.076924] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e38d0 00:32:06.332 [2024-07-22 16:09:09.077196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.332 [2024-07-22 16:09:09.077222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:06.332 [2024-07-22 16:09:09.093262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e3498 00:32:06.332 [2024-07-22 16:09:09.093605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.332 [2024-07-22 16:09:09.093636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:06.332 [2024-07-22 16:09:09.110270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e3060 00:32:06.332 [2024-07-22 16:09:09.110543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.332 [2024-07-22 16:09:09.110570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:06.332 [2024-07-22 16:09:09.127037] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e2c28 00:32:06.332 [2024-07-22 16:09:09.127287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.332 [2024-07-22 16:09:09.127311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:06.332 [2024-07-22 16:09:09.143987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e27f0 00:32:06.332 [2024-07-22 16:09:09.144240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.332 [2024-07-22 16:09:09.144265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:06.332 [2024-07-22 16:09:09.161226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e23b8 00:32:06.332 [2024-07-22 16:09:09.161477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:06.332 [2024-07-22 16:09:09.161517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:32:06.332 [2024-07-22 16:09:09.178209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e1f80
00:32:06.332 [2024-07-22 16:09:09.178427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:06.332 [2024-07-22 16:09:09.178451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:32:06.590 [2024-07-22 16:09:09.195176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e1b48
00:32:06.590 [2024-07-22 16:09:09.195400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:06.590 [2024-07-22 16:09:09.195434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:32:06.590 [2024-07-22 16:09:09.212132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e1710
00:32:06.590 [2024-07-22 16:09:09.212325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:06.590 [2024-07-22 16:09:09.212350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:32:06.590 [2024-07-22 16:09:09.228781] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf88030) with pdu=0x2000190e12d8
00:32:06.590 [2024-07-22 16:09:09.228988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:06.590 [2024-07-22 16:09:09.229013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:32:06.590
00:32:06.590                                                              Latency(us)
00:32:06.590 Device Information                      : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:06.590 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:06.590 nvme0n1                                 :       2.00   14765.45      57.68       0.00     0.00    8662.03    7685.59   24069.59
00:32:06.590 ===================================================================================================================
00:32:06.590 Total                                   :              14765.45      57.68       0.00     0.00    8662.03    7685.59   24069.59
00:32:06.590 0
00:32:06.590 16:09:09 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:06.590 16:09:09 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:06.590 16:09:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:06.591 16:09:09 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:06.591 | .driver_specific
00:32:06.591 | .nvme_error
00:32:06.591 | .status_code
00:32:06.591 | .command_transient_transport_error'
00:32:06.849 16:09:09 -- host/digest.sh@71 -- # (( 116 > 0 ))
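The get_transient_errcount trace above is where host/digest.sh verifies that the injected digest corruption was actually counted: with the bdev_nvme --nvme-error-stat option enabled, bdev_get_iostat reports per-status-code NVMe error counters, and the jq filter pulls out the transient transport errors (116 in this run). A minimal stand-alone sketch of the same check, assuming the bdevperf RPC socket is still listening at /var/tmp/bperf.sock and the bdev is named nvme0n1 as above:

  # Read the transient-transport-error counter for nvme0n1 over the bperf RPC socket
  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The test only passes if at least one digest error was recorded
  (( count > 0 ))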
00:32:06.849 16:09:09 -- host/digest.sh@73 -- # killprocess 71768
00:32:06.849 16:09:09 -- common/autotest_common.sh@926 -- # '[' -z 71768 ']'
00:32:06.849 16:09:09 -- common/autotest_common.sh@930 -- # kill -0 71768
00:32:06.849 16:09:09 -- common/autotest_common.sh@931 -- # uname
00:32:06.849 16:09:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:32:06.849 16:09:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71768
00:32:06.849 16:09:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:32:06.849 killing process with pid 71768 16:09:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:32:06.849 16:09:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71768'
00:32:06.849 Received shutdown signal, test time was about 2.000000 seconds
00:32:06.849
00:32:06.849                                                              Latency(us)
00:32:06.849 Device Information                      : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:06.849 ===================================================================================================================
00:32:06.849 Total                                   :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:32:06.849 16:09:09 -- common/autotest_common.sh@945 -- # kill 71768
00:32:06.849 16:09:09 -- common/autotest_common.sh@950 -- # wait 71768
00:32:07.108 16:09:09 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:32:07.108 16:09:09 -- host/digest.sh@54 -- # local rw bs qd
00:32:07.108 16:09:09 -- host/digest.sh@56 -- # rw=randwrite
00:32:07.108 16:09:09 -- host/digest.sh@56 -- # bs=131072
00:32:07.108 16:09:09 -- host/digest.sh@56 -- # qd=16
00:32:07.108 16:09:09 -- host/digest.sh@58 -- # bperfpid=71828
00:32:07.108 16:09:09 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:32:07.108 16:09:09 -- host/digest.sh@60 -- # waitforlisten 71828 /var/tmp/bperf.sock
00:32:07.108 16:09:09 -- common/autotest_common.sh@819 -- # '[' -z 71828 ']'
00:32:07.108 16:09:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:07.108 16:09:09 -- common/autotest_common.sh@824 -- # local max_retries=100
00:32:07.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 16:09:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:07.108 16:09:09 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:07.108 16:09:09 -- common/autotest_common.sh@10 -- # set +x
00:32:07.108 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:07.108 Zero copy mechanism will not be used.
00:32:07.108 [2024-07-22 16:09:09.780846] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:32:07.108 [2024-07-22 16:09:09.780930] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71828 ]
00:32:07.108 [2024-07-22 16:09:09.914356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:07.371 [2024-07-22 16:09:09.973096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:07.941 16:09:10 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:32:07.941 16:09:10 -- common/autotest_common.sh@852 -- # return 0
00:32:07.941 16:09:10 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:07.941 16:09:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:08.200 16:09:11 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:08.200 16:09:11 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:08.200 16:09:11 -- common/autotest_common.sh@10 -- # set +x
00:32:08.200 16:09:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:08.200 16:09:11 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:08.200 16:09:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:08.768 nvme0n1
00:32:08.768 16:09:11 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:08.768 16:09:11 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:08.768 16:09:11 -- common/autotest_common.sh@10 -- # set +x
00:32:08.768 16:09:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:08.768 16:09:11 -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:08.768 16:09:11 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:08.768 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:08.768 Zero copy mechanism will not be used.
00:32:08.768 Running I/O for 2 seconds...
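Condensed, the trace just above is the whole setup for this error pass: NVMe error counters are kept on the bdevperf side with retries disabled, a controller is attached over TCP with data digest (--ddgst) enabled, crc32c error injection is switched from disable to corrupt every 32 operations, and perform_tests starts the workload, so each corrupted digest shows up below as a data digest error plus a COMMAND TRANSIENT TRANSPORT ERROR completion. A minimal stand-alone sketch of the same steps, assuming a bdevperf instance already listening on /var/tmp/bperf.sock and an nvmf target reachable at 10.0.0.2:4420 (rpc_cmd in the trace talks to a separate, non-bperf RPC socket, taken here to be the default one):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # bdevperf side: keep per-bdev NVMe error statistics and never retry failed commands
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach the controller with data digest enabled so payload CRC mismatches are detected
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # inject a crc32c corruption once every 32 accel operations (default RPC socket assumed)
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # drive the workload that bdevperf was launched with (-w randwrite -o 131072 -q 16 -t 2)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests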
00:32:08.768 [2024-07-22 16:09:11.466996] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.768 [2024-07-22 16:09:11.467358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.768 [2024-07-22 16:09:11.467396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:08.768 [2024-07-22 16:09:11.472319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.768 [2024-07-22 16:09:11.472656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.768 [2024-07-22 16:09:11.472693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:08.768 [2024-07-22 16:09:11.477597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.768 [2024-07-22 16:09:11.477908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.768 [2024-07-22 16:09:11.477940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:08.768 [2024-07-22 16:09:11.482835] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.768 [2024-07-22 16:09:11.483175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.768 [2024-07-22 16:09:11.483206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.768 [2024-07-22 16:09:11.488066] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.768 [2024-07-22 16:09:11.488383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.488414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.493336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.493659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.493692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.498571] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.498918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.498953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.503818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.504123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.504154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.508949] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.509253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.509284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.514160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.514469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.514514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.519364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.519690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.519720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.524566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.524874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.524904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.529762] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.530077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.530116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.535015] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.535344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.535379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.540230] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.540570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.540607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.545512] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.545827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.545862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.550713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.551035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.551068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.556056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.556363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.556395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.561268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.561602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.561635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.566444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.566769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.566800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.571673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.572000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.572030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.576892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.577220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.577253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.582214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.582550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.582581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.587416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.587750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.587781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.592647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.592957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.592987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.597807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.598113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.598144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.603007] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.603312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.603342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.608240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.608579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 
[2024-07-22 16:09:11.608609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.613467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.613787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.613816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.618702] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.619032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.769 [2024-07-22 16:09:11.619061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:08.769 [2024-07-22 16:09:11.623874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.769 [2024-07-22 16:09:11.624181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.770 [2024-07-22 16:09:11.624211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:08.770 [2024-07-22 16:09:11.629071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:08.770 [2024-07-22 16:09:11.629378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:08.770 [2024-07-22 16:09:11.629408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.030 [2024-07-22 16:09:11.634235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.030 [2024-07-22 16:09:11.634574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.030 [2024-07-22 16:09:11.634606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.030 [2024-07-22 16:09:11.639426] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.030 [2024-07-22 16:09:11.639774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.030 [2024-07-22 16:09:11.639805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.030 [2024-07-22 16:09:11.644653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.030 [2024-07-22 16:09:11.644962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:09.030 [2024-07-22 16:09:11.644992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.030 [2024-07-22 16:09:11.649811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.030 [2024-07-22 16:09:11.650134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.030 [2024-07-22 16:09:11.650166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.030 [2024-07-22 16:09:11.655120] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.030 [2024-07-22 16:09:11.655434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.030 [2024-07-22 16:09:11.655467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.030 [2024-07-22 16:09:11.660306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.030 [2024-07-22 16:09:11.660632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.030 [2024-07-22 16:09:11.660665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.030 [2024-07-22 16:09:11.665432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.030 [2024-07-22 16:09:11.665771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.030 [2024-07-22 16:09:11.665802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.030 [2024-07-22 16:09:11.670622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.030 [2024-07-22 16:09:11.670940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.030 [2024-07-22 16:09:11.670971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.030 [2024-07-22 16:09:11.675867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.030 [2024-07-22 16:09:11.676176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.030 [2024-07-22 16:09:11.676211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.030 [2024-07-22 16:09:11.681187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.030 [2024-07-22 16:09:11.681510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.030 [2024-07-22 16:09:11.681557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.030 [2024-07-22 16:09:11.686430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.030 [2024-07-22 16:09:11.686775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.030 [2024-07-22 16:09:11.686807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.030 [2024-07-22 16:09:11.691788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.030 [2024-07-22 16:09:11.692124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.030 [2024-07-22 16:09:11.692154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.030 [2024-07-22 16:09:11.696997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.030 [2024-07-22 16:09:11.697304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.030 [2024-07-22 16:09:11.697334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.030 [2024-07-22 16:09:11.702162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.030 [2024-07-22 16:09:11.702487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.030 [2024-07-22 16:09:11.702527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.030 [2024-07-22 16:09:11.707295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.030 [2024-07-22 16:09:11.707662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.030 [2024-07-22 16:09:11.707693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.030 [2024-07-22 16:09:11.712692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.030 [2024-07-22 16:09:11.713017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.030 [2024-07-22 16:09:11.713051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.030 [2024-07-22 16:09:11.718069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.030 [2024-07-22 16:09:11.718449] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.030 [2024-07-22 16:09:11.718478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.030 [2024-07-22 16:09:11.723354] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.030 [2024-07-22 16:09:11.723689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.030 [2024-07-22 16:09:11.723717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.030 [2024-07-22 16:09:11.728705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.030 [2024-07-22 16:09:11.729047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.030 [2024-07-22 16:09:11.729083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.734089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.734422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.734459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.739413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.739773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.739809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.744647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.744972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.745003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.749871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.750191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.750220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.755072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.755376] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.755406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.760253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.760591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.760621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.765466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.765805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.765835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.770783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.771101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.771134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.776014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.776321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.776353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.781171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.781478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.781520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.786346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.786682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.786712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.791528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 
00:32:09.031 [2024-07-22 16:09:11.791849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.791879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.796702] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.797006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.797036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.801890] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.802194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.802225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.807136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.807441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.807471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.812370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.812693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.812727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.817672] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.817994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.818024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.822972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.823280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.823310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.828165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.828475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.828522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.833442] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.833821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.833861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.838854] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.839221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.839264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.844105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.844428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.844465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.849330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.849665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.849700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.854558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.854888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.854929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.859796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.860100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.860130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.864972] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.865278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.031 [2024-07-22 16:09:11.865308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.031 [2024-07-22 16:09:11.870120] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.031 [2024-07-22 16:09:11.870428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.032 [2024-07-22 16:09:11.870459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.032 [2024-07-22 16:09:11.875357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.032 [2024-07-22 16:09:11.875680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.032 [2024-07-22 16:09:11.875713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.032 [2024-07-22 16:09:11.880565] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.032 [2024-07-22 16:09:11.880870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.032 [2024-07-22 16:09:11.880900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.032 [2024-07-22 16:09:11.885746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.032 [2024-07-22 16:09:11.886058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.032 [2024-07-22 16:09:11.886095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.032 [2024-07-22 16:09:11.890937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.032 [2024-07-22 16:09:11.891243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.032 [2024-07-22 16:09:11.891278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.291 [2024-07-22 16:09:11.896169] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.291 [2024-07-22 16:09:11.896480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.291 [2024-07-22 16:09:11.896546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
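(Note on the repeated data_crc32_calc_done errors above: NVMe/TCP can carry a CRC-32C data digest for each PDU's data, and these lines are the receive side reporting that the digest computed over the received bytes did not match the digest carried in the PDU; each mismatch is then completed with the COMMAND TRANSIENT TRANSPORT ERROR (00/22) status printed on the following *NOTICE* lines. Below is a minimal, self-contained C sketch of such a check. It is illustrative only: the payload contents, the injected bit flip, and all names are hypothetical, and this is not SPDK's own tcp.c implementation.)

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.
 * NVMe/TCP data digests are CRC-32C values computed over a PDU's data.
 */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
        }
    }
    return ~crc;
}

int main(void)
{
    /* Sanity check against the well-known CRC-32C check value for "123456789". */
    const char *vec = "123456789";
    printf("crc32c(\"123456789\") = 0x%08" PRIx32 " (expected 0xe3069283)\n",
           crc32c((const uint8_t *)vec, strlen(vec)));

    /* Hypothetical 32-byte payload, mirroring the len:32 WRITEs in the log. */
    uint8_t pdu_data[32];
    memset(pdu_data, 0xA5, sizeof(pdu_data));
    uint32_t sent_digest = crc32c(pdu_data, sizeof(pdu_data));

    /* Simulate corruption on the wire, then redo the receive-side check. */
    pdu_data[7] ^= 0x01;
    uint32_t recv_digest = crc32c(pdu_data, sizeof(pdu_data));

    if (recv_digest != sent_digest) {
        printf("data digest error: sent 0x%08" PRIx32 ", computed 0x%08" PRIx32 "\n",
               sent_digest, recv_digest);
    }
    return 0;
}

(Compiled and run as an ordinary C program, the sketch prints a digest mismatch for the flipped bit, which is the same condition the *ERROR* lines in this log are reporting for each WRITE.)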
00:32:09.291 [2024-07-22 16:09:11.901325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.291 [2024-07-22 16:09:11.901647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.291 [2024-07-22 16:09:11.901683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.291 [2024-07-22 16:09:11.906545] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.291 [2024-07-22 16:09:11.906851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.291 [2024-07-22 16:09:11.906881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.291 [2024-07-22 16:09:11.911781] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.291 [2024-07-22 16:09:11.912089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.291 [2024-07-22 16:09:11.912119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.291 [2024-07-22 16:09:11.917030] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.291 [2024-07-22 16:09:11.917343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.291 [2024-07-22 16:09:11.917376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.291 [2024-07-22 16:09:11.922267] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.291 [2024-07-22 16:09:11.922595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.291 [2024-07-22 16:09:11.922625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.291 [2024-07-22 16:09:11.927564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.291 [2024-07-22 16:09:11.927873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.291 [2024-07-22 16:09:11.927903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.291 [2024-07-22 16:09:11.932765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.291 [2024-07-22 16:09:11.933068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.291 [2024-07-22 16:09:11.933110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.291 [2024-07-22 16:09:11.937980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.291 [2024-07-22 16:09:11.938295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.291 [2024-07-22 16:09:11.938328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.291 [2024-07-22 16:09:11.943351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.291 [2024-07-22 16:09:11.943680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.291 [2024-07-22 16:09:11.943715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.291 [2024-07-22 16:09:11.948599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.291 [2024-07-22 16:09:11.948941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.291 [2024-07-22 16:09:11.948982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.291 [2024-07-22 16:09:11.953979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:11.954302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:11.954340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:11.959278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:11.959613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:11.959646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:11.964439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:11.964764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:11.964803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:11.969663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:11.969972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:11.969998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:11.974832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:11.975146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:11.975170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:11.980028] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:11.980336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:11.980369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:11.985251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:11.985576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:11.985615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:11.990468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:11.990793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:11.990834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:11.995697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:11.996005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:11.996043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:12.000962] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:12.001269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:12.001307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:12.006101] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:12.006407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:12.006445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:12.011311] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:12.011638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:12.011677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:12.016548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:12.016877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:12.016913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:12.021794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:12.022114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:12.022150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:12.027000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:12.027312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:12.027352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:12.032229] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:12.032550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:12.032590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:12.037474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:12.037807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:12.037875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:12.042721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:12.043037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 
[2024-07-22 16:09:12.043069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:12.047958] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:12.048265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:12.048296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:12.053181] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:12.053507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:12.053536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:12.058357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:12.058690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:12.058721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:12.063649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:12.063956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:12.063985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:12.068857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:12.069166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:12.069196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:12.074070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:12.074376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:12.074406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:12.079323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:12.079647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:12.079681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:12.084452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:12.084770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:12.084801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:12.089659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.292 [2024-07-22 16:09:12.089966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.292 [2024-07-22 16:09:12.089996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.292 [2024-07-22 16:09:12.094849] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.293 [2024-07-22 16:09:12.095189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.293 [2024-07-22 16:09:12.095224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.293 [2024-07-22 16:09:12.100072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.293 [2024-07-22 16:09:12.100380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.293 [2024-07-22 16:09:12.100413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.293 [2024-07-22 16:09:12.105222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.293 [2024-07-22 16:09:12.105546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.293 [2024-07-22 16:09:12.105577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.293 [2024-07-22 16:09:12.110404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.293 [2024-07-22 16:09:12.110725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.293 [2024-07-22 16:09:12.110756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.293 [2024-07-22 16:09:12.115628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.293 [2024-07-22 16:09:12.115936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.293 [2024-07-22 16:09:12.115966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.293 [2024-07-22 16:09:12.120887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.293 [2024-07-22 16:09:12.121192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.293 [2024-07-22 16:09:12.121226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.293 [2024-07-22 16:09:12.126065] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.293 [2024-07-22 16:09:12.126373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.293 [2024-07-22 16:09:12.126404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.293 [2024-07-22 16:09:12.131233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.293 [2024-07-22 16:09:12.131567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.293 [2024-07-22 16:09:12.131598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.293 [2024-07-22 16:09:12.136449] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.293 [2024-07-22 16:09:12.136771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.293 [2024-07-22 16:09:12.136802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.293 [2024-07-22 16:09:12.141617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.293 [2024-07-22 16:09:12.141922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.293 [2024-07-22 16:09:12.141952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.293 [2024-07-22 16:09:12.146810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.293 [2024-07-22 16:09:12.147133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.293 [2024-07-22 16:09:12.147163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.293 [2024-07-22 16:09:12.152034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.293 [2024-07-22 16:09:12.152362] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.293 [2024-07-22 16:09:12.152398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.553 [2024-07-22 16:09:12.157227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.553 [2024-07-22 16:09:12.157553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.553 [2024-07-22 16:09:12.157585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.553 [2024-07-22 16:09:12.162414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.553 [2024-07-22 16:09:12.162738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.553 [2024-07-22 16:09:12.162772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.553 [2024-07-22 16:09:12.167655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.553 [2024-07-22 16:09:12.167961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.553 [2024-07-22 16:09:12.167993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.553 [2024-07-22 16:09:12.172811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.553 [2024-07-22 16:09:12.173116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.553 [2024-07-22 16:09:12.173147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.553 [2024-07-22 16:09:12.177980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.553 [2024-07-22 16:09:12.178287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.553 [2024-07-22 16:09:12.178318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.553 [2024-07-22 16:09:12.183155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.553 [2024-07-22 16:09:12.183463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.553 [2024-07-22 16:09:12.183503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.553 [2024-07-22 16:09:12.188320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.553 [2024-07-22 16:09:12.188639] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.553 [2024-07-22 16:09:12.188672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.553 [2024-07-22 16:09:12.193480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.553 [2024-07-22 16:09:12.193802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.553 [2024-07-22 16:09:12.193832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.553 [2024-07-22 16:09:12.198720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.553 [2024-07-22 16:09:12.199118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.553 [2024-07-22 16:09:12.199168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.553 [2024-07-22 16:09:12.204168] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.553 [2024-07-22 16:09:12.204610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.553 [2024-07-22 16:09:12.204658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.553 [2024-07-22 16:09:12.209678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.553 [2024-07-22 16:09:12.210038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.553 [2024-07-22 16:09:12.210080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.553 [2024-07-22 16:09:12.215106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.553 [2024-07-22 16:09:12.215432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.553 [2024-07-22 16:09:12.215468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.553 [2024-07-22 16:09:12.220339] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.553 [2024-07-22 16:09:12.220662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.553 [2024-07-22 16:09:12.220692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.553 [2024-07-22 16:09:12.225534] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 
00:32:09.553 [2024-07-22 16:09:12.225840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.553 [2024-07-22 16:09:12.225871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.553 [2024-07-22 16:09:12.230706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.553 [2024-07-22 16:09:12.231022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.553 [2024-07-22 16:09:12.231052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.553 [2024-07-22 16:09:12.235910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.553 [2024-07-22 16:09:12.236213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.553 [2024-07-22 16:09:12.236244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.553 [2024-07-22 16:09:12.241095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.553 [2024-07-22 16:09:12.241409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.553 [2024-07-22 16:09:12.241443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.553 [2024-07-22 16:09:12.246310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.553 [2024-07-22 16:09:12.246636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.553 [2024-07-22 16:09:12.246666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.553 [2024-07-22 16:09:12.251594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.553 [2024-07-22 16:09:12.251909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.553 [2024-07-22 16:09:12.251939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.553 [2024-07-22 16:09:12.256874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.257192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.257217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.262009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.262328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.262359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.267212] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.267531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.267561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.272437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.272758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.272789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.277573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.277880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.277910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.282837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.283170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.283204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.288069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.288386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.288417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.293318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.293662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.293698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.298647] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.298965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.298997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.303830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.304139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.304169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.309019] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.309337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.309372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.314217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.314564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.314599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.319626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.319961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.319997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.324807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.325115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.325147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.330018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.330350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.330391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:32:09.554 [2024-07-22 16:09:12.336760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.337131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.337174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.343701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.344036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.344069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.350552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.350904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.350953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.357453] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.357820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.357863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.364413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.364795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.364837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.371083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.371432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.371473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.378023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.378358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.378392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.384770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.385082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.385114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.390047] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.390363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.390394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.395301] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.395626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.395658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.400556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.400868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.400901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.554 [2024-07-22 16:09:12.405716] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.554 [2024-07-22 16:09:12.406023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.554 [2024-07-22 16:09:12.406053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.555 [2024-07-22 16:09:12.410941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.555 [2024-07-22 16:09:12.411249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.555 [2024-07-22 16:09:12.411294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.814 [2024-07-22 16:09:12.416823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.814 [2024-07-22 16:09:12.417146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.814 [2024-07-22 16:09:12.417179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.814 [2024-07-22 16:09:12.423222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.814 [2024-07-22 16:09:12.423572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.814 [2024-07-22 16:09:12.423602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.814 [2024-07-22 16:09:12.429587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.814 [2024-07-22 16:09:12.429912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.814 [2024-07-22 16:09:12.429941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.814 [2024-07-22 16:09:12.435918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.814 [2024-07-22 16:09:12.436268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.814 [2024-07-22 16:09:12.436299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.814 [2024-07-22 16:09:12.442260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.814 [2024-07-22 16:09:12.442609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.814 [2024-07-22 16:09:12.442640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.814 [2024-07-22 16:09:12.448630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.814 [2024-07-22 16:09:12.448953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.814 [2024-07-22 16:09:12.448983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.814 [2024-07-22 16:09:12.454947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.814 [2024-07-22 16:09:12.455276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.814 [2024-07-22 16:09:12.455306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.814 [2024-07-22 16:09:12.461220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.814 [2024-07-22 16:09:12.461571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.814 [2024-07-22 16:09:12.461601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.814 [2024-07-22 16:09:12.467493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.814 [2024-07-22 16:09:12.467843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.814 [2024-07-22 16:09:12.467868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.814 [2024-07-22 16:09:12.473976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.814 [2024-07-22 16:09:12.474311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.814 [2024-07-22 16:09:12.474341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.814 [2024-07-22 16:09:12.480109] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.814 [2024-07-22 16:09:12.480416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.814 [2024-07-22 16:09:12.480450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.814 [2024-07-22 16:09:12.485291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.814 [2024-07-22 16:09:12.485614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.814 [2024-07-22 16:09:12.485644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.814 [2024-07-22 16:09:12.490458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.814 [2024-07-22 16:09:12.490793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.814 [2024-07-22 16:09:12.490825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.814 [2024-07-22 16:09:12.495670] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.814 [2024-07-22 16:09:12.495978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.814 [2024-07-22 16:09:12.496010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.814 [2024-07-22 16:09:12.500800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.814 [2024-07-22 16:09:12.501103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.814 [2024-07-22 
16:09:12.501134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.814 [2024-07-22 16:09:12.505985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.814 [2024-07-22 16:09:12.506295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.814 [2024-07-22 16:09:12.506325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.814 [2024-07-22 16:09:12.511197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.814 [2024-07-22 16:09:12.511513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.814 [2024-07-22 16:09:12.511543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.814 [2024-07-22 16:09:12.516395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.814 [2024-07-22 16:09:12.516714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.814 [2024-07-22 16:09:12.516744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.814 [2024-07-22 16:09:12.521606] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.521916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.521945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.526835] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.527170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.527201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.532165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.532474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.532514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.537358] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.537681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.537710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.542576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.542884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.542922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.547734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.548039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.548068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.552945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.553264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.553296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.558251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.558593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.558623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.563539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.563845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.563876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.568713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.569017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.569047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.573931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.574237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.574268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.579114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.579433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.579466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.584360] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.584683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.584715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.589542] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.589850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.589881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.594743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.595071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.595100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.600060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.600377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.600412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.605366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.605725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.605761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.610666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.610998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.611039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.615895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.616211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.616247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.621182] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.621524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.621557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.626468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.626852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.626899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.631730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.632052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.632086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.636958] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.637286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.637321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.642255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.642674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.642721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.647642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.647959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.647993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.652963] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.653313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.653353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.658151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.815 [2024-07-22 16:09:12.658456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.815 [2024-07-22 16:09:12.658498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:09.815 [2024-07-22 16:09:12.663343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.816 [2024-07-22 16:09:12.663665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.816 [2024-07-22 16:09:12.663696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:09.816 [2024-07-22 16:09:12.668533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.816 [2024-07-22 16:09:12.668840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.816 [2024-07-22 16:09:12.668871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:09.816 [2024-07-22 16:09:12.673636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:09.816 [2024-07-22 16:09:12.673943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:09.816 [2024-07-22 16:09:12.673981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.076 [2024-07-22 16:09:12.678871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:10.076 [2024-07-22 16:09:12.679193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.076 [2024-07-22 16:09:12.679223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.076 [2024-07-22 16:09:12.684139] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:10.076 
[2024-07-22 16:09:12.684448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.076 [2024-07-22 16:09:12.684479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.076 [2024-07-22 16:09:12.689342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:10.076 [2024-07-22 16:09:12.689680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.076 [2024-07-22 16:09:12.689713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.076 [2024-07-22 16:09:12.694562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:10.076 [2024-07-22 16:09:12.694868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.076 [2024-07-22 16:09:12.694899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.076 [2024-07-22 16:09:12.699752] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:10.076 [2024-07-22 16:09:12.700058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.076 [2024-07-22 16:09:12.700089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:10.076 [2024-07-22 16:09:12.704937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:10.076 [2024-07-22 16:09:12.705247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.076 [2024-07-22 16:09:12.705278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:10.076 [2024-07-22 16:09:12.710237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:10.076 [2024-07-22 16:09:12.710565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.076 [2024-07-22 16:09:12.710594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.076 [2024-07-22 16:09:12.715435] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90 00:32:10.076 [2024-07-22 16:09:12.715758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.076 [2024-07-22 16:09:12.715791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:10.076 [2024-07-22 16:09:12.720637] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf86b60) 
with pdu=0x2000190fef90
00:32:10.076 [2024-07-22 16:09:12.720946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.076 [2024-07-22 16:09:12.720977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (tcp.c:2034:data_crc32_calc_done "Data digest error on tqpair=(0xf86b60) with pdu=0x2000190fef90", the nvme_qpair WRITE command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every injected data digest error from 16:09:12.725 through 16:09:13.454, with only the timestamps, lba, and sqhd values changing ...]
DATA BLOCK TRANSPORT 0x0 00:32:10.606 [2024-07-22 16:09:13.454379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:10.606 00:32:10.606 Latency(us) 00:32:10.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.606 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:10.606 nvme0n1 : 2.00 5793.96 724.25 0.00 0.00 2755.31 2263.97 11558.17 00:32:10.606 =================================================================================================================== 00:32:10.606 Total : 5793.96 724.25 0.00 0.00 2755.31 2263.97 11558.17 00:32:10.606 0 00:32:10.867 16:09:13 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:10.867 16:09:13 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:10.867 | .driver_specific 00:32:10.867 | .nvme_error 00:32:10.867 | .status_code 00:32:10.867 | .command_transient_transport_error' 00:32:10.867 16:09:13 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:10.867 16:09:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:11.131 16:09:13 -- host/digest.sh@71 -- # (( 374 > 0 )) 00:32:11.131 16:09:13 -- host/digest.sh@73 -- # killprocess 71828 00:32:11.131 16:09:13 -- common/autotest_common.sh@926 -- # '[' -z 71828 ']' 00:32:11.131 16:09:13 -- common/autotest_common.sh@930 -- # kill -0 71828 00:32:11.131 16:09:13 -- common/autotest_common.sh@931 -- # uname 00:32:11.131 16:09:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:11.131 16:09:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71828 00:32:11.131 16:09:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:11.131 16:09:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:11.131 killing process with pid 71828 00:32:11.131 16:09:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71828' 00:32:11.131 Received shutdown signal, test time was about 2.000000 seconds 00:32:11.131 00:32:11.131 Latency(us) 00:32:11.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.131 =================================================================================================================== 00:32:11.131 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:11.131 16:09:13 -- common/autotest_common.sh@945 -- # kill 71828 00:32:11.131 16:09:13 -- common/autotest_common.sh@950 -- # wait 71828 00:32:11.131 16:09:13 -- host/digest.sh@115 -- # killprocess 71614 00:32:11.131 16:09:13 -- common/autotest_common.sh@926 -- # '[' -z 71614 ']' 00:32:11.131 16:09:13 -- common/autotest_common.sh@930 -- # kill -0 71614 00:32:11.131 16:09:13 -- common/autotest_common.sh@931 -- # uname 00:32:11.131 16:09:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:11.131 16:09:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71614 00:32:11.396 16:09:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:11.396 16:09:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:11.396 16:09:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71614' 00:32:11.396 killing process with pid 71614 00:32:11.396 16:09:14 -- common/autotest_common.sh@945 -- # kill 71614 00:32:11.396 16:09:14 -- common/autotest_common.sh@950 -- # wait 71614 00:32:11.396 00:32:11.396 real 0m18.331s 00:32:11.396 user 0m35.896s 
00:32:11.396 sys 0m4.445s 00:32:11.396 16:09:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:11.396 16:09:14 -- common/autotest_common.sh@10 -- # set +x 00:32:11.396 ************************************ 00:32:11.396 END TEST nvmf_digest_error 00:32:11.396 ************************************ 00:32:11.396 16:09:14 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:32:11.396 16:09:14 -- host/digest.sh@139 -- # nvmftestfini 00:32:11.396 16:09:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:11.396 16:09:14 -- nvmf/common.sh@116 -- # sync 00:32:11.664 16:09:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:11.664 16:09:14 -- nvmf/common.sh@119 -- # set +e 00:32:11.664 16:09:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:11.664 16:09:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:11.664 rmmod nvme_tcp 00:32:11.664 rmmod nvme_fabrics 00:32:11.664 rmmod nvme_keyring 00:32:11.664 16:09:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:11.664 16:09:14 -- nvmf/common.sh@123 -- # set -e 00:32:11.664 16:09:14 -- nvmf/common.sh@124 -- # return 0 00:32:11.664 16:09:14 -- nvmf/common.sh@477 -- # '[' -n 71614 ']' 00:32:11.664 16:09:14 -- nvmf/common.sh@478 -- # killprocess 71614 00:32:11.664 16:09:14 -- common/autotest_common.sh@926 -- # '[' -z 71614 ']' 00:32:11.664 16:09:14 -- common/autotest_common.sh@930 -- # kill -0 71614 00:32:11.664 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (71614) - No such process 00:32:11.664 16:09:14 -- common/autotest_common.sh@953 -- # echo 'Process with pid 71614 is not found' 00:32:11.664 Process with pid 71614 is not found 00:32:11.664 16:09:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:11.664 16:09:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:11.664 16:09:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:11.664 16:09:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:11.664 16:09:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:11.664 16:09:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.664 16:09:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:11.664 16:09:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.664 16:09:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:32:11.664 00:32:11.664 real 0m35.288s 00:32:11.664 user 1m7.850s 00:32:11.664 sys 0m9.246s 00:32:11.664 16:09:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:11.664 16:09:14 -- common/autotest_common.sh@10 -- # set +x 00:32:11.664 ************************************ 00:32:11.664 END TEST nvmf_digest 00:32:11.664 ************************************ 00:32:11.664 16:09:14 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:32:11.664 16:09:14 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:32:11.664 16:09:14 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:32:11.664 16:09:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:11.664 16:09:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:11.664 16:09:14 -- common/autotest_common.sh@10 -- # set +x 00:32:11.664 ************************************ 00:32:11.664 START TEST nvmf_multipath 00:32:11.664 ************************************ 00:32:11.664 16:09:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:32:11.664 * Looking for test storage... 
00:32:11.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:32:11.664 16:09:14 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:11.664 16:09:14 -- nvmf/common.sh@7 -- # uname -s 00:32:11.664 16:09:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:11.664 16:09:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:11.664 16:09:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:11.664 16:09:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:11.664 16:09:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:11.664 16:09:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:11.664 16:09:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:11.664 16:09:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:11.664 16:09:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:11.664 16:09:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:11.664 16:09:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:32:11.664 16:09:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:32:11.664 16:09:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:11.664 16:09:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:11.664 16:09:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:11.664 16:09:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:11.664 16:09:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:11.664 16:09:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:11.664 16:09:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:11.664 16:09:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.664 16:09:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.664 16:09:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.664 16:09:14 -- paths/export.sh@5 
-- # export PATH 00:32:11.664 16:09:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.664 16:09:14 -- nvmf/common.sh@46 -- # : 0 00:32:11.664 16:09:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:11.664 16:09:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:11.664 16:09:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:11.664 16:09:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:11.664 16:09:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:11.664 16:09:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:11.664 16:09:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:11.664 16:09:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:11.664 16:09:14 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:11.664 16:09:14 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:11.664 16:09:14 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:11.664 16:09:14 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:32:11.664 16:09:14 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:11.664 16:09:14 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:11.664 16:09:14 -- host/multipath.sh@30 -- # nvmftestinit 00:32:11.664 16:09:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:11.664 16:09:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:11.664 16:09:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:11.664 16:09:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:11.664 16:09:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:11.665 16:09:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.665 16:09:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:11.665 16:09:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.665 16:09:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:32:11.665 16:09:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:32:11.665 16:09:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:32:11.665 16:09:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:32:11.665 16:09:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:32:11.665 16:09:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:32:11.665 16:09:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:11.665 16:09:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:11.665 16:09:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:11.665 16:09:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:32:11.665 16:09:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:11.665 16:09:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:11.665 16:09:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:11.665 16:09:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:11.665 16:09:14 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:11.665 16:09:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:11.665 16:09:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:11.665 16:09:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:11.665 16:09:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:32:11.665 16:09:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:32:11.933 Cannot find device "nvmf_tgt_br" 00:32:11.933 16:09:14 -- nvmf/common.sh@154 -- # true 00:32:11.933 16:09:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:32:11.933 Cannot find device "nvmf_tgt_br2" 00:32:11.934 16:09:14 -- nvmf/common.sh@155 -- # true 00:32:11.934 16:09:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:32:11.934 16:09:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:32:11.934 Cannot find device "nvmf_tgt_br" 00:32:11.934 16:09:14 -- nvmf/common.sh@157 -- # true 00:32:11.934 16:09:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:32:11.934 Cannot find device "nvmf_tgt_br2" 00:32:11.934 16:09:14 -- nvmf/common.sh@158 -- # true 00:32:11.934 16:09:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:32:11.934 16:09:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:32:11.934 16:09:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:11.934 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:11.934 16:09:14 -- nvmf/common.sh@161 -- # true 00:32:11.934 16:09:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:11.934 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:11.934 16:09:14 -- nvmf/common.sh@162 -- # true 00:32:11.934 16:09:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:32:11.934 16:09:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:11.934 16:09:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:11.934 16:09:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:11.934 16:09:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:11.934 16:09:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:11.934 16:09:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:11.934 16:09:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:11.934 16:09:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:11.934 16:09:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:32:11.934 16:09:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:32:11.934 16:09:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:32:11.934 16:09:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:32:11.934 16:09:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:11.934 16:09:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:11.934 16:09:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:11.934 16:09:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:32:11.934 16:09:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:32:11.934 16:09:14 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:32:11.934 16:09:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:12.204 16:09:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:12.204 16:09:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:12.204 16:09:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:12.204 16:09:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:32:12.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:12.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:32:12.204 00:32:12.204 --- 10.0.0.2 ping statistics --- 00:32:12.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.204 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:32:12.204 16:09:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:32:12.204 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:12.204 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:32:12.204 00:32:12.204 --- 10.0.0.3 ping statistics --- 00:32:12.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.205 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:32:12.205 16:09:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:12.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:12.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:32:12.205 00:32:12.205 --- 10.0.0.1 ping statistics --- 00:32:12.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.205 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:32:12.205 16:09:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:12.205 16:09:14 -- nvmf/common.sh@421 -- # return 0 00:32:12.205 16:09:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:12.205 16:09:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:12.205 16:09:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:12.205 16:09:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:12.205 16:09:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:12.205 16:09:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:12.205 16:09:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:12.205 16:09:14 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:32:12.205 16:09:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:12.205 16:09:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:12.205 16:09:14 -- common/autotest_common.sh@10 -- # set +x 00:32:12.205 16:09:14 -- nvmf/common.sh@469 -- # nvmfpid=72093 00:32:12.205 16:09:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:12.205 16:09:14 -- nvmf/common.sh@470 -- # waitforlisten 72093 00:32:12.205 16:09:14 -- common/autotest_common.sh@819 -- # '[' -z 72093 ']' 00:32:12.205 16:09:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.205 16:09:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:12.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:12.205 16:09:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
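For reference, the nvmf_veth_init sequence traced above amounts to the following topology sketch (interface names and addresses are taken from this log; assumes iproute2 and root, and the second target interface nvmf_tgt_if2 / 10.0.0.3 is created the same way — this is a condensed outline, not the exact script text):
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                              # initiator side, on the host
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side, inside the namespace
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br                               # bridge the two veth halves
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                    # initiator -> target, matching the ping statistics above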
00:32:12.205 16:09:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:12.205 16:09:14 -- common/autotest_common.sh@10 -- # set +x 00:32:12.205 [2024-07-22 16:09:14.937351] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:12.205 [2024-07-22 16:09:14.937458] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:12.477 [2024-07-22 16:09:15.075005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:12.477 [2024-07-22 16:09:15.143990] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:12.477 [2024-07-22 16:09:15.144168] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:12.477 [2024-07-22 16:09:15.144188] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:12.477 [2024-07-22 16:09:15.144199] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:12.477 [2024-07-22 16:09:15.144368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:12.477 [2024-07-22 16:09:15.144393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.422 16:09:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:13.422 16:09:15 -- common/autotest_common.sh@852 -- # return 0 00:32:13.422 16:09:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:13.422 16:09:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:13.422 16:09:15 -- common/autotest_common.sh@10 -- # set +x 00:32:13.422 16:09:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:13.422 16:09:15 -- host/multipath.sh@33 -- # nvmfapp_pid=72093 00:32:13.422 16:09:15 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:13.422 [2024-07-22 16:09:16.219660] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.422 16:09:16 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:13.680 Malloc0 00:32:13.680 16:09:16 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:13.939 16:09:16 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:14.197 16:09:17 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:14.455 [2024-07-22 16:09:17.292003] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.455 16:09:17 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:14.712 [2024-07-22 16:09:17.560144] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:15.032 16:09:17 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:15.032 16:09:17 -- host/multipath.sh@44 -- # bdevperf_pid=72149 
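Taken together, the target- and initiator-side setup for the multipath test is, in outline (every call is copied from this trace, including the two controller attaches that appear just below; rpc.py paths are abbreviated, and the target-side calls go to the nvmf_tgt started with -m 0x3 inside the namespace):
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # bdevperf runs on its own RPC socket; attaching the same NQN twice, the second time
    # with -x multipath, gives the one Nvme0n1 bdev two paths (ports 4420 and 4421):
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10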
00:32:15.032 16:09:17 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:15.032 16:09:17 -- host/multipath.sh@47 -- # waitforlisten 72149 /var/tmp/bdevperf.sock 00:32:15.032 16:09:17 -- common/autotest_common.sh@819 -- # '[' -z 72149 ']' 00:32:15.032 16:09:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:15.032 16:09:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:15.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:15.032 16:09:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:15.032 16:09:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:15.032 16:09:17 -- common/autotest_common.sh@10 -- # set +x 00:32:15.969 16:09:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:15.969 16:09:18 -- common/autotest_common.sh@852 -- # return 0 00:32:15.969 16:09:18 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:16.228 16:09:18 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:32:16.486 Nvme0n1 00:32:16.486 16:09:19 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:16.745 Nvme0n1 00:32:17.003 16:09:19 -- host/multipath.sh@78 -- # sleep 1 00:32:17.003 16:09:19 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:17.937 16:09:20 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:32:17.937 16:09:20 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:18.195 16:09:20 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:18.453 16:09:21 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:32:18.453 16:09:21 -- host/multipath.sh@65 -- # dtrace_pid=72194 00:32:18.453 16:09:21 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72093 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:32:18.453 16:09:21 -- host/multipath.sh@66 -- # sleep 6 00:32:25.021 16:09:27 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:32:25.021 16:09:27 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:32:25.021 16:09:27 -- host/multipath.sh@67 -- # active_port=4421 00:32:25.021 16:09:27 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:25.021 Attaching 4 probes... 
00:32:25.021 @path[10.0.0.2, 4421]: 17966 00:32:25.021 @path[10.0.0.2, 4421]: 18288 00:32:25.021 @path[10.0.0.2, 4421]: 18280 00:32:25.021 @path[10.0.0.2, 4421]: 18280 00:32:25.021 @path[10.0.0.2, 4421]: 18249 00:32:25.021 16:09:27 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:32:25.021 16:09:27 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:32:25.021 16:09:27 -- host/multipath.sh@69 -- # sed -n 1p 00:32:25.021 16:09:27 -- host/multipath.sh@69 -- # port=4421 00:32:25.021 16:09:27 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:32:25.021 16:09:27 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:32:25.021 16:09:27 -- host/multipath.sh@72 -- # kill 72194 00:32:25.021 16:09:27 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:25.021 16:09:27 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:32:25.021 16:09:27 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:25.280 16:09:27 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:25.538 16:09:28 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:32:25.538 16:09:28 -- host/multipath.sh@65 -- # dtrace_pid=72312 00:32:25.538 16:09:28 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72093 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:32:25.538 16:09:28 -- host/multipath.sh@66 -- # sleep 6 00:32:32.133 16:09:34 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:32:32.133 16:09:34 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:32:32.133 16:09:34 -- host/multipath.sh@67 -- # active_port=4420 00:32:32.133 16:09:34 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:32.133 Attaching 4 probes... 
00:32:32.133 @path[10.0.0.2, 4420]: 18234 00:32:32.133 @path[10.0.0.2, 4420]: 18516 00:32:32.133 @path[10.0.0.2, 4420]: 18525 00:32:32.133 @path[10.0.0.2, 4420]: 17946 00:32:32.133 @path[10.0.0.2, 4420]: 18222 00:32:32.133 16:09:34 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:32:32.133 16:09:34 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:32:32.133 16:09:34 -- host/multipath.sh@69 -- # sed -n 1p 00:32:32.133 16:09:34 -- host/multipath.sh@69 -- # port=4420 00:32:32.133 16:09:34 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:32:32.133 16:09:34 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:32:32.133 16:09:34 -- host/multipath.sh@72 -- # kill 72312 00:32:32.133 16:09:34 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:32.133 16:09:34 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:32:32.133 16:09:34 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:32.133 16:09:34 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:32.391 16:09:35 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:32:32.391 16:09:35 -- host/multipath.sh@65 -- # dtrace_pid=72430 00:32:32.391 16:09:35 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72093 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:32:32.391 16:09:35 -- host/multipath.sh@66 -- # sleep 6 00:32:38.982 16:09:41 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:32:38.982 16:09:41 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:32:38.982 16:09:41 -- host/multipath.sh@67 -- # active_port=4421 00:32:38.982 16:09:41 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:38.982 Attaching 4 probes... 
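Each set_ANA_state / confirm_io_on_port block repeated in this trace follows the same pattern: flip the ANA state of the two listeners, trace for a few seconds which listener actually receives I/O, then compare that port with the one the target reports in the requested state. Roughly (commands taken from this trace; 72093 is the nvmf_tgt pid in this run, and the background/redirect handling and the port-extraction pipeline are a sketch rather than the exact script text):
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
    scripts/bpftrace.sh 72093 scripts/bpf/nvmf_path.bt > trace.txt &   # records lines like "@path[10.0.0.2, 4421]: 18288"
    sleep 6
    # Which listener does the target report as optimized?
    active_port=$(scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid')
    # Which port did the bpftrace probe actually see I/O on?
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
    [[ $port == "$active_port" ]]   # the step passes only if the two agree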
00:32:38.982 @path[10.0.0.2, 4421]: 16242 00:32:38.982 @path[10.0.0.2, 4421]: 17753 00:32:38.982 @path[10.0.0.2, 4421]: 18056 00:32:38.982 @path[10.0.0.2, 4421]: 16377 00:32:38.982 @path[10.0.0.2, 4421]: 16757 00:32:38.982 16:09:41 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:32:38.982 16:09:41 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:32:38.982 16:09:41 -- host/multipath.sh@69 -- # sed -n 1p 00:32:38.982 16:09:41 -- host/multipath.sh@69 -- # port=4421 00:32:38.982 16:09:41 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:32:38.982 16:09:41 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:32:38.982 16:09:41 -- host/multipath.sh@72 -- # kill 72430 00:32:38.982 16:09:41 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:38.982 16:09:41 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:32:38.982 16:09:41 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:38.982 16:09:41 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:39.294 16:09:41 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:32:39.294 16:09:41 -- host/multipath.sh@65 -- # dtrace_pid=72548 00:32:39.294 16:09:41 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72093 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:32:39.294 16:09:41 -- host/multipath.sh@66 -- # sleep 6 00:32:45.851 16:09:47 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:32:45.851 16:09:47 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:32:45.851 16:09:48 -- host/multipath.sh@67 -- # active_port= 00:32:45.851 16:09:48 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:45.851 Attaching 4 probes... 
00:32:45.851 00:32:45.851 00:32:45.851 00:32:45.851 00:32:45.851 00:32:45.851 16:09:48 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:32:45.851 16:09:48 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:32:45.851 16:09:48 -- host/multipath.sh@69 -- # sed -n 1p 00:32:45.851 16:09:48 -- host/multipath.sh@69 -- # port= 00:32:45.851 16:09:48 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:32:45.851 16:09:48 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:32:45.851 16:09:48 -- host/multipath.sh@72 -- # kill 72548 00:32:45.851 16:09:48 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:45.851 16:09:48 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:32:45.851 16:09:48 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:45.851 16:09:48 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:45.851 16:09:48 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:32:45.851 16:09:48 -- host/multipath.sh@65 -- # dtrace_pid=72659 00:32:45.851 16:09:48 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72093 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:32:45.851 16:09:48 -- host/multipath.sh@66 -- # sleep 6 00:32:52.407 16:09:54 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:32:52.407 16:09:54 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:32:52.407 16:09:55 -- host/multipath.sh@67 -- # active_port=4421 00:32:52.407 16:09:55 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:52.407 Attaching 4 probes... 
00:32:52.407 @path[10.0.0.2, 4421]: 17021 00:32:52.407 @path[10.0.0.2, 4421]: 16700 00:32:52.407 @path[10.0.0.2, 4421]: 13253 00:32:52.407 @path[10.0.0.2, 4421]: 16751 00:32:52.407 @path[10.0.0.2, 4421]: 16314 00:32:52.407 16:09:55 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:32:52.407 16:09:55 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:32:52.407 16:09:55 -- host/multipath.sh@69 -- # sed -n 1p 00:32:52.407 16:09:55 -- host/multipath.sh@69 -- # port=4421 00:32:52.407 16:09:55 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:32:52.407 16:09:55 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:32:52.407 16:09:55 -- host/multipath.sh@72 -- # kill 72659 00:32:52.407 16:09:55 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:32:52.407 16:09:55 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:52.665 [2024-07-22 16:09:55.317256] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.665 [2024-07-22 16:09:55.317325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.665 [2024-07-22 16:09:55.317338] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.665 [2024-07-22 16:09:55.317347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.665 [2024-07-22 16:09:55.317356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.665 [2024-07-22 16:09:55.317365] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.665 [2024-07-22 16:09:55.317373] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.665 [2024-07-22 16:09:55.317382] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.665 [2024-07-22 16:09:55.317391] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.665 [2024-07-22 16:09:55.317399] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.665 [2024-07-22 16:09:55.317408] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.665 [2024-07-22 16:09:55.317416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.665 [2024-07-22 16:09:55.317424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.665 [2024-07-22 16:09:55.317433] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317441] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317449] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317458] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317466] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317474] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317523] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317532] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317540] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317549] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317566] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317589] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317606] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317623] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317631] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317674] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317682] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317690] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317707] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317715] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317723] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317732] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317741] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317757] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317773] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317781] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317806] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317823] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317848] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the 
state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317856] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317863] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317872] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317880] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 [2024-07-22 16:09:55.317896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2324ac0 is same with the state(5) to be set 00:32:52.666 16:09:55 -- host/multipath.sh@101 -- # sleep 1 00:32:53.602 16:09:56 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:32:53.602 16:09:56 -- host/multipath.sh@65 -- # dtrace_pid=72784 00:32:53.602 16:09:56 -- host/multipath.sh@66 -- # sleep 6 00:32:53.602 16:09:56 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72093 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:33:00.169 16:10:02 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:33:00.169 16:10:02 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:33:00.169 16:10:02 -- host/multipath.sh@67 -- # active_port=4420 00:33:00.169 16:10:02 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:00.169 Attaching 4 probes... 
00:33:00.169 @path[10.0.0.2, 4420]: 16696 00:33:00.169 @path[10.0.0.2, 4420]: 17108 00:33:00.169 @path[10.0.0.2, 4420]: 17451 00:33:00.169 @path[10.0.0.2, 4420]: 17397 00:33:00.169 @path[10.0.0.2, 4420]: 16120 00:33:00.169 16:10:02 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:33:00.169 16:10:02 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:33:00.170 16:10:02 -- host/multipath.sh@69 -- # sed -n 1p 00:33:00.170 16:10:02 -- host/multipath.sh@69 -- # port=4420 00:33:00.170 16:10:02 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:33:00.170 16:10:02 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:33:00.170 16:10:02 -- host/multipath.sh@72 -- # kill 72784 00:33:00.170 16:10:02 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:00.170 16:10:02 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:00.170 [2024-07-22 16:10:03.010674] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:00.427 16:10:03 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:00.684 16:10:03 -- host/multipath.sh@111 -- # sleep 6 00:33:07.244 16:10:09 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:33:07.244 16:10:09 -- host/multipath.sh@65 -- # dtrace_pid=72964 00:33:07.244 16:10:09 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72093 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:33:07.244 16:10:09 -- host/multipath.sh@66 -- # sleep 6 00:33:13.815 16:10:15 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:33:13.815 16:10:15 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:33:13.815 16:10:15 -- host/multipath.sh@67 -- # active_port=4421 00:33:13.815 16:10:15 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:13.815 Attaching 4 probes... 
00:33:13.815 @path[10.0.0.2, 4421]: 16878 00:33:13.815 @path[10.0.0.2, 4421]: 17203 00:33:13.815 @path[10.0.0.2, 4421]: 17768 00:33:13.815 @path[10.0.0.2, 4421]: 17744 00:33:13.815 @path[10.0.0.2, 4421]: 17664 00:33:13.815 16:10:15 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:33:13.815 16:10:15 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:33:13.815 16:10:15 -- host/multipath.sh@69 -- # sed -n 1p 00:33:13.815 16:10:15 -- host/multipath.sh@69 -- # port=4421 00:33:13.815 16:10:15 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:33:13.815 16:10:15 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:33:13.815 16:10:15 -- host/multipath.sh@72 -- # kill 72964 00:33:13.815 16:10:15 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:13.815 16:10:15 -- host/multipath.sh@114 -- # killprocess 72149 00:33:13.815 16:10:15 -- common/autotest_common.sh@926 -- # '[' -z 72149 ']' 00:33:13.815 16:10:15 -- common/autotest_common.sh@930 -- # kill -0 72149 00:33:13.815 16:10:15 -- common/autotest_common.sh@931 -- # uname 00:33:13.815 16:10:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:13.815 16:10:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72149 00:33:13.815 killing process with pid 72149 00:33:13.815 16:10:15 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:33:13.815 16:10:15 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:33:13.815 16:10:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72149' 00:33:13.815 16:10:15 -- common/autotest_common.sh@945 -- # kill 72149 00:33:13.815 16:10:15 -- common/autotest_common.sh@950 -- # wait 72149 00:33:13.815 Connection closed with partial response: 00:33:13.815 00:33:13.815 00:33:13.815 16:10:15 -- host/multipath.sh@116 -- # wait 72149 00:33:13.815 16:10:15 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:33:13.815 [2024-07-22 16:09:17.618775] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:33:13.815 [2024-07-22 16:09:17.618886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72149 ] 00:33:13.815 [2024-07-22 16:09:17.752819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.815 [2024-07-22 16:09:17.819863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:13.815 Running I/O for 90 seconds... 
00:33:13.815 [2024-07-22 16:09:28.173034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:38664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.815 [2024-07-22 16:09:28.173122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.815 [2024-07-22 16:09:28.173206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.815 [2024-07-22 16:09:28.173246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.815 [2024-07-22 16:09:28.173283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.815 [2024-07-22 16:09:28.173320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.815 [2024-07-22 16:09:28.173356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.815 [2024-07-22 16:09:28.173393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.815 [2024-07-22 16:09:28.173430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.815 [2024-07-22 16:09:28.173466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.815 [2024-07-22 16:09:28.173525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.815 [2024-07-22 16:09:28.173579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:38064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.815 [2024-07-22 16:09:28.173619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.815 [2024-07-22 16:09:28.173656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.815 [2024-07-22 16:09:28.173693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.815 [2024-07-22 16:09:28.173729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.815 [2024-07-22 16:09:28.173765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.815 [2024-07-22 16:09:28.173801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.815 [2024-07-22 16:09:28.173837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.815 [2024-07-22 16:09:28.173874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.815 [2024-07-22 16:09:28.173910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.815 [2024-07-22 16:09:28.173946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:13.815 [2024-07-22 16:09:28.173967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.173982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.816 [2024-07-22 16:09:28.174018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.816 [2024-07-22 16:09:28.174066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.816 [2024-07-22 16:09:28.174102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.174139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.816 [2024-07-22 16:09:28.174175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.816 [2024-07-22 16:09:28.174219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.174256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:38160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:13.816 [2024-07-22 16:09:28.174294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:38168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.174331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:38184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.174367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.174404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.174441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.174478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.174541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.816 [2024-07-22 16:09:28.174578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.174614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.174651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.816 [2024-07-22 16:09:28.174688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.174724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.174761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.174797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.174833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.174870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.816 [2024-07-22 16:09:28.174916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.174967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.174989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.816 [2024-07-22 16:09:28.175012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.175035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.175051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.175073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.816 [2024-07-22 16:09:28.175088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.175110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.816 [2024-07-22 16:09:28.175125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.175147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.816 [2024-07-22 16:09:28.175162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.175188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.816 [2024-07-22 16:09:28.175204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.175226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.816 [2024-07-22 16:09:28.175241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.175263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.175278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.175299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.175315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.175336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.175351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.175373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.175388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:13.816 [2024-07-22 16:09:28.175409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.816 [2024-07-22 16:09:28.175424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
00:33:13.816 [2024-07-22 16:09:28.175446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.175475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.175512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.175530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.175551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.175567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.175588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.175604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.175625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.175640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.175661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:38984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.175676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.175698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.175718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.175741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.817 [2024-07-22 16:09:28.175757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.175779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.817 [2024-07-22 16:09:28.175794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.175816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.817 [2024-07-22 16:09:28.175831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.175852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.817 [2024-07-22 16:09:28.175867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.175888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.817 [2024-07-22 16:09:28.175904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.175925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.175941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.175971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.817 [2024-07-22 16:09:28.175987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.176033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.176069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.176106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.817 [2024-07-22 16:09:28.176170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.176208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.817 [2024-07-22 16:09:28.176245] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.176281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.176318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.176357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.176406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.176447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.176509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.176549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.176585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.176622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:13.817 [2024-07-22 16:09:28.176658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.176695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.817 [2024-07-22 16:09:28.176733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.817 [2024-07-22 16:09:28.176770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.176816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.176853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.176889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.817 [2024-07-22 16:09:28.176926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.817 [2024-07-22 16:09:28.176971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:13.817 [2024-07-22 16:09:28.176994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.817 [2024-07-22 16:09:28.177012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.177035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.818 [2024-07-22 16:09:28.177051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.177072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:28.177087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.177109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:28.177124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.177145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:28.177161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.177182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:28.177197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.177219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:28.177235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.177256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:28.177272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.177294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:28.177309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:28.179141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.818 [2024-07-22 16:09:28.179189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.818 [2024-07-22 16:09:28.179239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:28.179280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:28.179318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:28.179356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.818 [2024-07-22 16:09:28.179394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:28.179432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.818 [2024-07-22 16:09:28.179469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.818 [2024-07-22 16:09:28.179526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:28.179565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:28.179602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002a p:0 m:0 dnr:0 
00:33:13.818 [2024-07-22 16:09:28.179624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:28.179639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:28.179677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:28.179714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:28.179762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:28.179799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.818 [2024-07-22 16:09:28.179836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.818 [2024-07-22 16:09:28.179874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.818 [2024-07-22 16:09:28.179934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:28.179974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:28.179995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.818 [2024-07-22 16:09:28.180011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:34.739359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.818 [2024-07-22 16:09:34.739436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:34.739514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.818 [2024-07-22 16:09:34.739538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:34.739562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:34.739579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:34.739601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:34.739616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:34.739638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:34.739653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:34.739706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:34.739723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:34.739745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:34.739759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:34.739780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:34.739795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:13.818 [2024-07-22 16:09:34.739816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.818 [2024-07-22 16:09:34.739830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.739852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.819 [2024-07-22 16:09:34.739866] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.739887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.819 [2024-07-22 16:09:34.739902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.739923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.819 [2024-07-22 16:09:34.739938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.739959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.819 [2024-07-22 16:09:34.739974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.739995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.819 [2024-07-22 16:09:34.740009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.819 [2024-07-22 16:09:34.740045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.819 [2024-07-22 16:09:34.740081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.819 [2024-07-22 16:09:34.740118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.819 [2024-07-22 16:09:34.740168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.819 [2024-07-22 16:09:34.740208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:13.819 [2024-07-22 16:09:34.740310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.819 [2024-07-22 16:09:34.740349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.819 [2024-07-22 16:09:34.740385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.819 [2024-07-22 16:09:34.740421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.819 [2024-07-22 16:09:34.740458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.819 [2024-07-22 16:09:34.740510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.819 [2024-07-22 16:09:34.740549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.819 [2024-07-22 16:09:34.740587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.819 [2024-07-22 16:09:34.740624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.819 [2024-07-22 16:09:34.740660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 
nsid:1 lba:19992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.819 [2024-07-22 16:09:34.740706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.819 [2024-07-22 16:09:34.740745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.819 [2024-07-22 16:09:34.740782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.819 [2024-07-22 16:09:34.740819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.819 [2024-07-22 16:09:34.740864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.819 [2024-07-22 16:09:34.740901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.819 [2024-07-22 16:09:34.740937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.819 [2024-07-22 16:09:34.740974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.740995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.819 [2024-07-22 16:09:34.741011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:13.819 [2024-07-22 16:09:34.741032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.819 [2024-07-22 16:09:34.741047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.820 [2024-07-22 16:09:34.741084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.741121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.820 [2024-07-22 16:09:34.741192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.741232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.820 [2024-07-22 16:09:34.741269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.820 [2024-07-22 16:09:34.741305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.741344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.741380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.741417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.741453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:33:13.820 [2024-07-22 16:09:34.741474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.741505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.741547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.741583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.741620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.741657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.741703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.820 [2024-07-22 16:09:34.741740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.741776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.820 [2024-07-22 16:09:34.741813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.741850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.741886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.741923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.741959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.741981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.741996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.742018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.820 [2024-07-22 16:09:34.742033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.742054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.820 [2024-07-22 16:09:34.742069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.742091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.820 [2024-07-22 16:09:34.742107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.742150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.820 [2024-07-22 16:09:34.742172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.742195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.820 [2024-07-22 16:09:34.742211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.742233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.820 [2024-07-22 16:09:34.742248] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.742269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.742285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.742306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.820 [2024-07-22 16:09:34.742322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.742344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.742359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.742381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.742396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.742418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.742433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.742455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.742470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.742503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.742522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:13.820 [2024-07-22 16:09:34.742544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.820 [2024-07-22 16:09:34.742559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.742581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.742595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.742617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20280 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.742640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.742664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.742679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.742701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.742716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.742738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.742753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.742775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.742791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.742813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.742828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.742850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.742864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.742886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.742901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.742935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.821 [2024-07-22 16:09:34.742951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.742972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.742987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.743009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:126 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.821 [2024-07-22 16:09:34.743024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.743045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.821 [2024-07-22 16:09:34.743060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.743081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.743104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.743126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.743142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.743163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.821 [2024-07-22 16:09:34.743178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.743199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.821 [2024-07-22 16:09:34.743214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.743236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.821 [2024-07-22 16:09:34.743255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.743277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.821 [2024-07-22 16:09:34.743292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.743313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.743329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.743350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.821 [2024-07-22 16:09:34.743365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.743386] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.743402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.743424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.743439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.743461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.743476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.743508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.743526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.743548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.743563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.744695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.744725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.744762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.744780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.744810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.744825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.744854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.744870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.744900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.821 [2024-07-22 16:09:34.744915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0071 
p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.744945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.744960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.744989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.821 [2024-07-22 16:09:34.745005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.745034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.745051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.745080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.745096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.745125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.821 [2024-07-22 16:09:34.745141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.745170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.821 [2024-07-22 16:09:34.745185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.745215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.821 [2024-07-22 16:09:34.745231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:13.821 [2024-07-22 16:09:34.745274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.822 [2024-07-22 16:09:34.745290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:34.745320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:34.745336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:34.745366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.822 [2024-07-22 16:09:34.745381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:34.745410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:34.745426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:34.745455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:34.745470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:34.745512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:34.745530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:34.745560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.822 [2024-07-22 16:09:34.745575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:34.745605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:34.745620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:34.745650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:34.745665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:34.745711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.822 [2024-07-22 16:09:34.745732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:34.745763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.822 [2024-07-22 16:09:34.745780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.876131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.822 [2024-07-22 16:09:41.876210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.876273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:68432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.822 [2024-07-22 16:09:41.876315] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.876341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.822 [2024-07-22 16:09:41.876358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.876380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:68448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:41.876395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.876417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:68456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:41.876431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.876453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.822 [2024-07-22 16:09:41.876469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.876505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:68472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:41.876523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.876545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.822 [2024-07-22 16:09:41.876561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.876582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:68488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:41.876597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.876618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.822 [2024-07-22 16:09:41.876634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.876656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.822 [2024-07-22 16:09:41.876671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.876692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68512 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:41.876707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.876728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.822 [2024-07-22 16:09:41.876742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.876763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:41.876788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.876811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:67856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:41.876827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.876849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:67912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:41.876866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.876888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:67944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:41.876903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.876924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:41.876940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.876962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:67968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:41.876989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.877019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:67976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:41.877036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.877058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:67984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:41.877074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.877096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:95 nsid:1 lba:68528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:41.877111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.877410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.822 [2024-07-22 16:09:41.877435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.877460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:68544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:41.877476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.877514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.822 [2024-07-22 16:09:41.877532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.877554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:68560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:41.877569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:13.822 [2024-07-22 16:09:41.877603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:68568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.822 [2024-07-22 16:09:41.877620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.877643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.823 [2024-07-22 16:09:41.877658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.877680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:68584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.823 [2024-07-22 16:09:41.877695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.877718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:68592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.823 [2024-07-22 16:09:41.877733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.877755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:68600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.823 [2024-07-22 16:09:41.877771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.877793] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.823 [2024-07-22 16:09:41.877808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.877830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.823 [2024-07-22 16:09:41.877846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.877868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:68624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.823 [2024-07-22 16:09:41.877883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.877905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.823 [2024-07-22 16:09:41.877921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.877943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:68640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.823 [2024-07-22 16:09:41.877958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.877993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:68648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.823 [2024-07-22 16:09:41.878015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.823 [2024-07-22 16:09:41.878054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.823 [2024-07-22 16:09:41.878102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:68672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.823 [2024-07-22 16:09:41.878140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.823 [2024-07-22 16:09:41.878177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e 
p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.823 [2024-07-22 16:09:41.878215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.823 [2024-07-22 16:09:41.878253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:68008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.823 [2024-07-22 16:09:41.878290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:68016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.823 [2024-07-22 16:09:41.878328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:68064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.823 [2024-07-22 16:09:41.878367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:68072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.823 [2024-07-22 16:09:41.878404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:68120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.823 [2024-07-22 16:09:41.878442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:68128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.823 [2024-07-22 16:09:41.878479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:68144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.823 [2024-07-22 16:09:41.878534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.823 [2024-07-22 16:09:41.878581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.823 [2024-07-22 16:09:41.878618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:68712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.823 [2024-07-22 16:09:41.878656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.823 [2024-07-22 16:09:41.878694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:68728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.823 [2024-07-22 16:09:41.878732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:68736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.823 [2024-07-22 16:09:41.878769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.823 [2024-07-22 16:09:41.878807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.823 [2024-07-22 16:09:41.878844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:68760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.823 [2024-07-22 16:09:41.878921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.878950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:68768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.823 [2024-07-22 16:09:41.878971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.879006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:68776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.823 [2024-07-22 16:09:41.879024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.879046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.823 [2024-07-22 16:09:41.879061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.879084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:68792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.823 [2024-07-22 16:09:41.879110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.879133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:68800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.823 [2024-07-22 16:09:41.879149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.879171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:68808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.823 [2024-07-22 16:09:41.879188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:13.823 [2024-07-22 16:09:41.879210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.824 [2024-07-22 16:09:41.879225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.879248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.824 [2024-07-22 16:09:41.879263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.879285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:68832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.879300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.879322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:68152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.879337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.879359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:68160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.879374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.879396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:68176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:13.824 [2024-07-22 16:09:41.879411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.879433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:68184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.879449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.879471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:68200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.879499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.879524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:68216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.879540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.879563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.879588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.879612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:68240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.879628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.879651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.879666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.879688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.879703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.879725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.879740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.879763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.879778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.879800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.879816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.879838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.824 [2024-07-22 16:09:41.879853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.879875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:68888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.824 [2024-07-22 16:09:41.879890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.879912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.879928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.879950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.879968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.880003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.880021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.880043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:68920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.824 [2024-07-22 16:09:41.880059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.880090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.824 [2024-07-22 16:09:41.880111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.880134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.880150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.880171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:68944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.824 [2024-07-22 16:09:41.880186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.880208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:68248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.880223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.880245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:68272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.880261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.880283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:68328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.880298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.880320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:68336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.880335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.880357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:68344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.880372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.880394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:68352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.824 [2024-07-22 16:09:41.880408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:13.824 [2024-07-22 16:09:41.880430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:68376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:41.880447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.881413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:68400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:41.881443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.881480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:68952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:41.881513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.881550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:41.881573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 
m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.881604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:41.881620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.881650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:41.881665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.881695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.825 [2024-07-22 16:09:41.881711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.881741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:41.881757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.881787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:69000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.825 [2024-07-22 16:09:41.881802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.881832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:41.881847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.881877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:69016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.825 [2024-07-22 16:09:41.881892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.881922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:69024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.825 [2024-07-22 16:09:41.881938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.881967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.825 [2024-07-22 16:09:41.881983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.882012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:41.882028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.882058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:69048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.825 [2024-07-22 16:09:41.882073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.882103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:69056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.825 [2024-07-22 16:09:41.882127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.882176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:69064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.825 [2024-07-22 16:09:41.882198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.882229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:69072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.825 [2024-07-22 16:09:41.882245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.882275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.825 [2024-07-22 16:09:41.882290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.882320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:41.882336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.882366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:69096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.825 [2024-07-22 16:09:41.882381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.882410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:69104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.825 [2024-07-22 16:09:41.882427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:41.882457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:69112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.825 [2024-07-22 16:09:41.882472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:55.317969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:55.318018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:55.318045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:55.318061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:55.318076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:55.318090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:55.318105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:55.318118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:55.318133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:55.318165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:55.318182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:55.318196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:55.318211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:55.318224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:55.318239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:55.318252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:55.318267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:55.318279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:55.318294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:55.318307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:55.318322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:55.318335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:55.318350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:55.318363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:55.318377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:55.318390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:55.318405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:55.318418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:55.318432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:55.318445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.825 [2024-07-22 16:09:55.318460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.825 [2024-07-22 16:09:55.318473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.318501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.318519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.318534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.318557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.318573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.318586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.318601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.318614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.318629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.318642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.318658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.318671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.318686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.318699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.318714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.318728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.318743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.318756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.318771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.318784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.318800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.318813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.318827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.318840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.318855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.318868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.318883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.318896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.318935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.318949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 
[2024-07-22 16:09:55.318965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.318978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.318994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.826 [2024-07-22 16:09:55.319008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.319036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.319065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.826 [2024-07-22 16:09:55.319094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.319122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.826 [2024-07-22 16:09:55.319150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.826 [2024-07-22 16:09:55.319183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.319211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.319239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.319267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.319302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.319331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.319360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.319389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.319417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.319455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.319501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.319534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.319562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:121 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.826 [2024-07-22 16:09:55.319590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.826 [2024-07-22 16:09:55.319618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.826 [2024-07-22 16:09:55.319647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.826 [2024-07-22 16:09:55.319662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.319675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.319698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.827 [2024-07-22 16:09:55.319712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.319727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.827 [2024-07-22 16:09:55.319740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.319755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.319769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.319784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.827 [2024-07-22 16:09:55.319797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.319813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.319826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.319841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.827 [2024-07-22 16:09:55.319855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.319870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96800 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.827 [2024-07-22 16:09:55.319883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.319898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.827 [2024-07-22 16:09:55.319911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.319926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.319939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.319954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.319967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.319982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.319996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.827 [2024-07-22 16:09:55.320024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.320058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.320087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.827 [2024-07-22 16:09:55.320115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.320143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 
[2024-07-22 16:09:55.320172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.320200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.320228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.320256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.320284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.320315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.320344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.320373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.827 [2024-07-22 16:09:55.320401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.320434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.827 [2024-07-22 16:09:55.320464] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.320504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.320534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.827 [2024-07-22 16:09:55.320563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.827 [2024-07-22 16:09:55.320591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.827 [2024-07-22 16:09:55.320619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.320648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.320676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.827 [2024-07-22 16:09:55.320705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.320733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.320761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.320791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.827 [2024-07-22 16:09:55.320813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.827 [2024-07-22 16:09:55.320828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.320843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.320856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.320871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.828 [2024-07-22 16:09:55.320884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.320899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.320912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.320927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.320941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.320956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.320969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.320984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.320997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.321025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.321053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.321082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.321110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.321138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.321172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.321201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.828 [2024-07-22 16:09:55.321232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.828 [2024-07-22 16:09:55.321261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.321290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.321318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.828 [2024-07-22 16:09:55.321346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 
[2024-07-22 16:09:55.321361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.321374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.321402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.828 [2024-07-22 16:09:55.321431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.321459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.321497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.828 [2024-07-22 16:09:55.321528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:00 16:10:15 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:13.828 00 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.828 [2024-07-22 16:09:55.321563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.321592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.828 [2024-07-22 16:09:55.321620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.321648] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:13.828 [2024-07-22 16:09:55.321676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.321706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.828 [2024-07-22 16:09:55.321736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.828 [2024-07-22 16:09:55.321751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.829 [2024-07-22 16:09:55.321765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.829 [2024-07-22 16:09:55.321781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.829 [2024-07-22 16:09:55.321794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.829 [2024-07-22 16:09:55.321808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443430 is same with the state(5) to be set 00:33:13.829 [2024-07-22 16:09:55.321826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:13.829 [2024-07-22 16:09:55.321836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:13.829 [2024-07-22 16:09:55.321847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96576 len:8 PRP1 0x0 PRP2 0x0 00:33:13.829 [2024-07-22 16:09:55.321860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:13.829 [2024-07-22 16:09:55.321913] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1443430 was disconnected and freed. reset controller. 
00:33:13.829 [2024-07-22 16:09:55.323051] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:13.829 [2024-07-22 16:09:55.323145] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1616980 (9): Bad file descriptor 00:33:13.829 [2024-07-22 16:09:55.323479] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.829 [2024-07-22 16:09:55.323586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.829 [2024-07-22 16:09:55.323641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.829 [2024-07-22 16:09:55.323665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1616980 with addr=10.0.0.2, port=4421 00:33:13.829 [2024-07-22 16:09:55.323680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1616980 is same with the state(5) to be set 00:33:13.829 [2024-07-22 16:09:55.323715] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1616980 (9): Bad file descriptor 00:33:13.829 [2024-07-22 16:09:55.323747] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:13.829 [2024-07-22 16:09:55.323762] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:13.829 [2024-07-22 16:09:55.323776] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:13.829 [2024-07-22 16:09:55.323809] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:13.829 [2024-07-22 16:09:55.323826] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:13.829 [2024-07-22 16:10:05.370258] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
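A minimal sketch of one way to provoke the same abort/reconnect pattern by hand against an already-running target, using only RPCs, the NQN, and the addresses that appear elsewhere in this log (it is not the multipath test's own sequence). For reading the trace: "ABORTED - SQ DELETION (00/08)" is the status the target returns for commands still queued on a submission queue it is tearing down, and errno 111 from connect() is ECONNREFUSED, i.e. nothing is currently listening on the path bdev_nvme is trying to reconnect to.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Dropping the listener tears down the submission queues on that path; queued
# commands then complete with "ABORTED - SQ DELETION" as in the trace above.
$rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421

# While it is gone, reconnect attempts fail with connect() errno 111
# (ECONNREFUSED) and the controller sits in reset.
sleep 10

# Re-adding the listener lets the pending reset complete
# ("Resetting controller successful").
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421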
00:33:13.829 Received shutdown signal, test time was about 56.035584 seconds 00:33:13.829 00:33:13.829 Latency(us) 00:33:13.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:13.829 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:13.829 Verification LBA range: start 0x0 length 0x4000 00:33:13.829 Nvme0n1 : 56.03 10003.53 39.08 0.00 0.00 12774.60 193.63 7015926.69 00:33:13.829 =================================================================================================================== 00:33:13.829 Total : 10003.53 39.08 0.00 0.00 12774.60 193.63 7015926.69 00:33:13.829 16:10:16 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:33:13.829 16:10:16 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:33:13.829 16:10:16 -- host/multipath.sh@125 -- # nvmftestfini 00:33:13.829 16:10:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:13.829 16:10:16 -- nvmf/common.sh@116 -- # sync 00:33:13.829 16:10:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:13.829 16:10:16 -- nvmf/common.sh@119 -- # set +e 00:33:13.829 16:10:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:13.829 16:10:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:13.829 rmmod nvme_tcp 00:33:13.829 rmmod nvme_fabrics 00:33:13.829 rmmod nvme_keyring 00:33:13.829 16:10:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:13.829 16:10:16 -- nvmf/common.sh@123 -- # set -e 00:33:13.829 16:10:16 -- nvmf/common.sh@124 -- # return 0 00:33:13.829 16:10:16 -- nvmf/common.sh@477 -- # '[' -n 72093 ']' 00:33:13.829 16:10:16 -- nvmf/common.sh@478 -- # killprocess 72093 00:33:13.829 16:10:16 -- common/autotest_common.sh@926 -- # '[' -z 72093 ']' 00:33:13.829 16:10:16 -- common/autotest_common.sh@930 -- # kill -0 72093 00:33:13.829 16:10:16 -- common/autotest_common.sh@931 -- # uname 00:33:13.829 16:10:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:13.829 16:10:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72093 00:33:13.829 16:10:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:13.829 killing process with pid 72093 00:33:13.829 16:10:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:13.829 16:10:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72093' 00:33:13.829 16:10:16 -- common/autotest_common.sh@945 -- # kill 72093 00:33:13.829 16:10:16 -- common/autotest_common.sh@950 -- # wait 72093 00:33:13.829 16:10:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:13.829 16:10:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:13.829 16:10:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:13.829 16:10:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:13.829 16:10:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:13.829 16:10:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:13.829 16:10:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:13.829 16:10:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:13.829 16:10:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:33:13.829 00:33:13.829 real 1m2.160s 00:33:13.829 user 2m52.393s 00:33:13.829 sys 0m19.206s 00:33:13.829 16:10:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:13.829 ************************************ 00:33:13.829 END TEST nvmf_multipath 00:33:13.829 16:10:16 -- common/autotest_common.sh@10 -- # set +x 
00:33:13.829 ************************************ 00:33:13.829 16:10:16 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:33:13.829 16:10:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:13.829 16:10:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:13.829 16:10:16 -- common/autotest_common.sh@10 -- # set +x 00:33:13.829 ************************************ 00:33:13.829 START TEST nvmf_timeout 00:33:13.829 ************************************ 00:33:13.829 16:10:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:33:14.087 * Looking for test storage... 00:33:14.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:14.087 16:10:16 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:14.087 16:10:16 -- nvmf/common.sh@7 -- # uname -s 00:33:14.087 16:10:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:14.087 16:10:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:14.087 16:10:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:14.087 16:10:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:14.087 16:10:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:14.087 16:10:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:14.087 16:10:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:14.087 16:10:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:14.087 16:10:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:14.087 16:10:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:14.087 16:10:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:33:14.087 16:10:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:33:14.087 16:10:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:14.087 16:10:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:14.087 16:10:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:14.087 16:10:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:14.087 16:10:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:14.087 16:10:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:14.087 16:10:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:14.087 16:10:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.087 16:10:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.088 16:10:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.088 16:10:16 -- paths/export.sh@5 -- # export PATH 00:33:14.088 16:10:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.088 16:10:16 -- nvmf/common.sh@46 -- # : 0 00:33:14.088 16:10:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:14.088 16:10:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:14.088 16:10:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:14.088 16:10:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:14.088 16:10:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:14.088 16:10:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:14.088 16:10:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:14.088 16:10:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:14.088 16:10:16 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:14.088 16:10:16 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:14.088 16:10:16 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:14.088 16:10:16 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:33:14.088 16:10:16 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:14.088 16:10:16 -- host/timeout.sh@19 -- # nvmftestinit 00:33:14.088 16:10:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:14.088 16:10:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:14.088 16:10:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:14.088 16:10:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:14.088 16:10:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:14.088 16:10:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:14.088 16:10:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:14.088 16:10:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:14.088 16:10:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 
00:33:14.088 16:10:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:33:14.088 16:10:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:33:14.088 16:10:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:33:14.088 16:10:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:33:14.088 16:10:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:33:14.088 16:10:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:14.088 16:10:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:14.088 16:10:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:14.088 16:10:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:33:14.088 16:10:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:14.088 16:10:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:14.088 16:10:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:14.088 16:10:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:14.088 16:10:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:14.088 16:10:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:14.088 16:10:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:14.088 16:10:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:14.088 16:10:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:33:14.088 16:10:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:33:14.088 Cannot find device "nvmf_tgt_br" 00:33:14.088 16:10:16 -- nvmf/common.sh@154 -- # true 00:33:14.088 16:10:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:33:14.088 Cannot find device "nvmf_tgt_br2" 00:33:14.088 16:10:16 -- nvmf/common.sh@155 -- # true 00:33:14.088 16:10:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:33:14.088 16:10:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:33:14.088 Cannot find device "nvmf_tgt_br" 00:33:14.088 16:10:16 -- nvmf/common.sh@157 -- # true 00:33:14.088 16:10:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:33:14.088 Cannot find device "nvmf_tgt_br2" 00:33:14.088 16:10:16 -- nvmf/common.sh@158 -- # true 00:33:14.088 16:10:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:33:14.088 16:10:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:33:14.088 16:10:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:14.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:14.088 16:10:16 -- nvmf/common.sh@161 -- # true 00:33:14.088 16:10:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:14.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:14.088 16:10:16 -- nvmf/common.sh@162 -- # true 00:33:14.088 16:10:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:33:14.088 16:10:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:14.088 16:10:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:14.088 16:10:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:14.088 16:10:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:14.088 16:10:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:14.088 16:10:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:33:14.088 16:10:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:14.088 16:10:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:14.088 16:10:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:33:14.088 16:10:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:33:14.088 16:10:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:33:14.088 16:10:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:33:14.088 16:10:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:14.088 16:10:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:14.346 16:10:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:14.346 16:10:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:33:14.346 16:10:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:33:14.346 16:10:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:33:14.346 16:10:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:14.346 16:10:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:14.346 16:10:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:14.346 16:10:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:14.346 16:10:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:33:14.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:14.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:33:14.346 00:33:14.346 --- 10.0.0.2 ping statistics --- 00:33:14.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.346 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:33:14.346 16:10:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:33:14.346 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:14.346 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:33:14.346 00:33:14.346 --- 10.0.0.3 ping statistics --- 00:33:14.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.346 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:33:14.346 16:10:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:14.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:14.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:33:14.346 00:33:14.346 --- 10.0.0.1 ping statistics --- 00:33:14.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.346 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:33:14.346 16:10:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:14.346 16:10:17 -- nvmf/common.sh@421 -- # return 0 00:33:14.346 16:10:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:14.346 16:10:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:14.346 16:10:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:14.346 16:10:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:14.346 16:10:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:14.346 16:10:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:14.346 16:10:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:14.346 16:10:17 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:33:14.346 16:10:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:14.346 16:10:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:14.346 16:10:17 -- common/autotest_common.sh@10 -- # set +x 00:33:14.346 16:10:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:14.346 16:10:17 -- nvmf/common.sh@469 -- # nvmfpid=73266 00:33:14.346 16:10:17 -- nvmf/common.sh@470 -- # waitforlisten 73266 00:33:14.346 16:10:17 -- common/autotest_common.sh@819 -- # '[' -z 73266 ']' 00:33:14.346 16:10:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.346 16:10:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:14.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:14.346 16:10:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.346 16:10:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:14.346 16:10:17 -- common/autotest_common.sh@10 -- # set +x 00:33:14.347 [2024-07-22 16:10:17.121999] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:33:14.347 [2024-07-22 16:10:17.122104] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:14.605 [2024-07-22 16:10:17.260794] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:14.605 [2024-07-22 16:10:17.328565] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:14.605 [2024-07-22 16:10:17.328728] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:14.605 [2024-07-22 16:10:17.328745] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:14.605 [2024-07-22 16:10:17.328756] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:14.605 [2024-07-22 16:10:17.328863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:14.605 [2024-07-22 16:10:17.328877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.559 16:10:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:15.559 16:10:18 -- common/autotest_common.sh@852 -- # return 0 00:33:15.559 16:10:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:15.559 16:10:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:15.559 16:10:18 -- common/autotest_common.sh@10 -- # set +x 00:33:15.559 16:10:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:15.559 16:10:18 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:15.559 16:10:18 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:15.559 [2024-07-22 16:10:18.383639] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:15.559 16:10:18 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:16.125 Malloc0 00:33:16.126 16:10:18 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:16.383 16:10:19 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:16.641 16:10:19 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:16.899 [2024-07-22 16:10:19.614941] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:16.899 16:10:19 -- host/timeout.sh@32 -- # bdevperf_pid=73325 00:33:16.899 16:10:19 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:33:16.899 16:10:19 -- host/timeout.sh@34 -- # waitforlisten 73325 /var/tmp/bdevperf.sock 00:33:16.899 16:10:19 -- common/autotest_common.sh@819 -- # '[' -z 73325 ']' 00:33:16.899 16:10:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:16.899 16:10:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:16.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:16.899 16:10:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:16.899 16:10:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:16.899 16:10:19 -- common/autotest_common.sh@10 -- # set +x 00:33:16.899 [2024-07-22 16:10:19.680780] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:33:16.899 [2024-07-22 16:10:19.680882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73325 ] 00:33:17.157 [2024-07-22 16:10:19.828810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.157 [2024-07-22 16:10:19.907580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:18.091 16:10:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:18.091 16:10:20 -- common/autotest_common.sh@852 -- # return 0 00:33:18.091 16:10:20 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:18.350 16:10:21 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:33:18.608 NVMe0n1 00:33:18.608 16:10:21 -- host/timeout.sh@51 -- # rpc_pid=73344 00:33:18.608 16:10:21 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:18.608 16:10:21 -- host/timeout.sh@53 -- # sleep 1 00:33:18.867 Running I/O for 10 seconds... 00:33:19.802 16:10:22 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:20.063 [2024-07-22 16:10:22.678636] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f2d20 is same with the state(5) to be set 00:33:20.063 [2024-07-22 16:10:22.678691] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f2d20 is same with the state(5) to be set 00:33:20.063 [2024-07-22 16:10:22.678703] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f2d20 is same with the state(5) to be set 00:33:20.063 [2024-07-22 16:10:22.678712] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f2d20 is same with the state(5) to be set 00:33:20.063 [2024-07-22 16:10:22.678721] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f2d20 is same with the state(5) to be set 00:33:20.063 [2024-07-22 16:10:22.678729] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f2d20 is same with the state(5) to be set 00:33:20.063 [2024-07-22 16:10:22.678739] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f2d20 is same with the state(5) to be set 00:33:20.063 [2024-07-22 16:10:22.678747] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f2d20 is same with the state(5) to be set 00:33:20.063 [2024-07-22 16:10:22.678756] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f2d20 is same with the state(5) to be set 00:33:20.063 [2024-07-22 16:10:22.678764] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f2d20 is same with the state(5) to be set 00:33:20.063 [2024-07-22 16:10:22.678773] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f2d20 is same with the state(5) to be set 00:33:20.063 [2024-07-22 16:10:22.678781] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f2d20 is same with the state(5) to be set 00:33:20.064 
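Stripped of the xtrace prefixes, the timeout-test setup that produces the flood of SQ-DELETION aborts below reduces to a short RPC sequence. A condensed sketch follows, using the same commands and flags shown in the trace above (the bdevperf launch and the perform_tests call are omitted); the flag meanings in the comments are the plain reading of their names, not additions to the test.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
brpc="$rpc -s /var/tmp/bdevperf.sock"   # bdevperf's own RPC socket

# Target side: TCP transport, a 64 MB malloc namespace, one subsystem,
# one listener on 10.0.0.2:4420.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator (bdevperf) side: -r -1 as in the trace, then attach with a 2 s
# reconnect delay and a 5 s controller-loss budget.
$brpc bdev_nvme_set_options -r -1
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Pulling the listener while I/O is running is what aborts every queued
# command below with "ABORTED - SQ DELETION (00/08)".
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420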
[2024-07-22 16:10:22.678790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f2d20 is same with the state(5) to be set 00:33:20.064 [2024-07-22 16:10:22.678798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f2d20 is same with the state(5) to be set 00:33:20.064 [2024-07-22 16:10:22.678862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:121520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.678911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.678936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:121552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.678947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.678959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.678969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.678980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:121568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.678989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:121640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.064 [2024-07-22 16:10:22.679235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:121648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.064 [2024-07-22 16:10:22.679258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.064 [2024-07-22 16:10:22.679279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:121664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.064 [2024-07-22 16:10:22.679299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679312] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:121680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.064 [2024-07-22 16:10:22.679341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.064 [2024-07-22 16:10:22.679362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:121704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.064 [2024-07-22 16:10:22.679403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.064 [2024-07-22 16:10:22.679430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.064 [2024-07-22 16:10:22.679450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:121728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:121736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:121000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 
lba:121008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:121024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:121048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:121056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:121080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:121096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.064 [2024-07-22 16:10:22.679704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.064 [2024-07-22 16:10:22.679713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.679724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.679733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.679745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121768 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.679754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.679765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.679774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.679785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:121784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.065 [2024-07-22 16:10:22.679794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.679805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:121792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.065 [2024-07-22 16:10:22.679815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.679826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:121800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.065 [2024-07-22 16:10:22.679835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.679847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.065 [2024-07-22 16:10:22.679856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.679867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:121816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.679876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.679888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.679897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.679908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:121144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.679917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.679928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.679937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.679949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 
[2024-07-22 16:10:22.679958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.679969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.679978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.679990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.679999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.680020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.680048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:121824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.680069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:121832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.065 [2024-07-22 16:10:22.680089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.680111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.065 [2024-07-22 16:10:22.680131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.065 [2024-07-22 16:10:22.680153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:121864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.680173] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.680193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.065 [2024-07-22 16:10:22.680214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.065 [2024-07-22 16:10:22.680234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.680256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.065 [2024-07-22 16:10:22.680276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:121912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.065 [2024-07-22 16:10:22.680297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.680317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.680338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.680358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:121272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.680380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:121280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.680401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.680421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:121312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.680441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.680461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.680495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:121928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.680519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:121936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.065 [2024-07-22 16:10:22.680539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.065 [2024-07-22 16:10:22.680551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.066 [2024-07-22 16:10:22.680560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.066 [2024-07-22 16:10:22.680591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:121960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.066 [2024-07-22 16:10:22.680612] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.066 [2024-07-22 16:10:22.680632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.066 [2024-07-22 16:10:22.680653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.066 [2024-07-22 16:10:22.680674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.066 [2024-07-22 16:10:22.680694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.066 [2024-07-22 16:10:22.680714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:122008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.066 [2024-07-22 16:10:22.680735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:122016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.066 [2024-07-22 16:10:22.680755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:122024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.066 [2024-07-22 16:10:22.680775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:122032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.066 [2024-07-22 16:10:22.680796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:122040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.066 [2024-07-22 16:10:22.680816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:122048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.066 [2024-07-22 16:10:22.680839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:122056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.066 [2024-07-22 16:10:22.680860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.066 [2024-07-22 16:10:22.680881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:122072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.066 [2024-07-22 16:10:22.680901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.066 [2024-07-22 16:10:22.680923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:121408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.066 [2024-07-22 16:10:22.680943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.066 [2024-07-22 16:10:22.680964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:121440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.066 [2024-07-22 16:10:22.680984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.680995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.066 [2024-07-22 16:10:22.681004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.681016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:121456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.066 [2024-07-22 16:10:22.681025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.681036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.066 [2024-07-22 16:10:22.681045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.681057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:121512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.066 [2024-07-22 16:10:22.681066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.681077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.066 [2024-07-22 16:10:22.681089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.681100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:122088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.066 [2024-07-22 16:10:22.681110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.681121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:122096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.066 [2024-07-22 16:10:22.681130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.681142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:122104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.066 [2024-07-22 16:10:22.681151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.681162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:122112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.066 [2024-07-22 16:10:22.681173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.681185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:122120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.066 [2024-07-22 16:10:22.681194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.681205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:122128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.066 [2024-07-22 16:10:22.681214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.681226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:122136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.066 [2024-07-22 16:10:22.681235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 
16:10:22.681246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.066 [2024-07-22 16:10:22.681255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.681266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:122152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.066 [2024-07-22 16:10:22.681276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.681287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:122160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.066 [2024-07-22 16:10:22.681296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.681307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.066 [2024-07-22 16:10:22.681317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.681328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:122176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.066 [2024-07-22 16:10:22.681337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.681348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:122184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.066 [2024-07-22 16:10:22.681357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.066 [2024-07-22 16:10:22.681368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.066 [2024-07-22 16:10:22.681388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.067 [2024-07-22 16:10:22.681399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.067 [2024-07-22 16:10:22.681408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.067 [2024-07-22 16:10:22.681420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:122208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.067 [2024-07-22 16:10:22.681431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.067 [2024-07-22 16:10:22.681443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.067 [2024-07-22 16:10:22.681452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.067 [2024-07-22 16:10:22.681464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.067 [2024-07-22 16:10:22.681473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.067 [2024-07-22 16:10:22.681493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:121536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.067 [2024-07-22 16:10:22.681504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.067 [2024-07-22 16:10:22.681516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:121544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.067 [2024-07-22 16:10:22.681527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.067 [2024-07-22 16:10:22.681539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.067 [2024-07-22 16:10:22.681548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.067 [2024-07-22 16:10:22.681559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:121584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.067 [2024-07-22 16:10:22.681568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.067 [2024-07-22 16:10:22.681580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.067 [2024-07-22 16:10:22.681589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.067 [2024-07-22 16:10:22.681600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:121616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.067 [2024-07-22 16:10:22.681609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.067 [2024-07-22 16:10:22.681619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c7200 is same with the state(5) to be set 00:33:20.067 [2024-07-22 16:10:22.681633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.067 [2024-07-22 16:10:22.681641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.067 [2024-07-22 16:10:22.681649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121632 len:8 PRP1 0x0 PRP2 0x0 00:33:20.067 [2024-07-22 16:10:22.681658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.067 [2024-07-22 16:10:22.681703] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9c7200 was disconnected and freed. reset controller. 
00:33:20.067 [2024-07-22 16:10:22.681957] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.067 [2024-07-22 16:10:22.682036] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x984000 (9): Bad file descriptor 00:33:20.067 [2024-07-22 16:10:22.682139] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.067 [2024-07-22 16:10:22.682221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.067 [2024-07-22 16:10:22.682268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.067 [2024-07-22 16:10:22.682285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x984000 with addr=10.0.0.2, port=4420 00:33:20.067 [2024-07-22 16:10:22.682296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x984000 is same with the state(5) to be set 00:33:20.067 [2024-07-22 16:10:22.682316] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x984000 (9): Bad file descriptor 00:33:20.067 [2024-07-22 16:10:22.682333] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.067 [2024-07-22 16:10:22.682345] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.067 [2024-07-22 16:10:22.682355] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.067 [2024-07-22 16:10:22.682376] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.067 [2024-07-22 16:10:22.682386] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.067 16:10:22 -- host/timeout.sh@56 -- # sleep 2 00:33:21.967 [2024-07-22 16:10:24.682540] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-07-22 16:10:24.682645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-07-22 16:10:24.682694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.967 [2024-07-22 16:10:24.682712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x984000 with addr=10.0.0.2, port=4420 00:33:21.967 [2024-07-22 16:10:24.682726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x984000 is same with the state(5) to be set 00:33:21.967 [2024-07-22 16:10:24.682754] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x984000 (9): Bad file descriptor 00:33:21.967 [2024-07-22 16:10:24.682786] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.967 [2024-07-22 16:10:24.682798] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.967 [2024-07-22 16:10:24.682809] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.967 [2024-07-22 16:10:24.682838] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.967 [2024-07-22 16:10:24.682849] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.967 16:10:24 -- host/timeout.sh@57 -- # get_controller 00:33:21.967 16:10:24 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:21.967 16:10:24 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:33:22.532 16:10:25 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:33:22.532 16:10:25 -- host/timeout.sh@58 -- # get_bdev 00:33:22.532 16:10:25 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:33:22.532 16:10:25 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:33:22.532 16:10:25 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:33:22.532 16:10:25 -- host/timeout.sh@61 -- # sleep 5 00:33:23.905 [2024-07-22 16:10:26.683054] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.905 [2024-07-22 16:10:26.683144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.905 [2024-07-22 16:10:26.683191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.905 [2024-07-22 16:10:26.683209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x984000 with addr=10.0.0.2, port=4420 00:33:23.905 [2024-07-22 16:10:26.683223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x984000 is same with the state(5) to be set 00:33:23.905 [2024-07-22 16:10:26.683249] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x984000 (9): Bad file descriptor 00:33:23.905 [2024-07-22 16:10:26.683270] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.905 [2024-07-22 16:10:26.683280] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.905 [2024-07-22 16:10:26.683291] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.905 [2024-07-22 16:10:26.683319] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.905 [2024-07-22 16:10:26.683330] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:26.436 [2024-07-22 16:10:28.683406] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:26.436 [2024-07-22 16:10:28.683472] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:26.436 [2024-07-22 16:10:28.683494] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:26.436 [2024-07-22 16:10:28.683507] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:33:26.436 [2024-07-22 16:10:28.683535] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
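For readability, the get_controller/get_bdev checks traced at host/timeout.sh@41 and @37 above reduce to the following shell sketch; every path, RPC method and expected name is taken from the trace itself, and only the ctrlr/bdev helper variables and the comments are added here for illustration:

    # Ask the running bdevperf instance which NVMe controllers and bdevs it still has
    ctrlr=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$ctrlr" == "NVMe0" ]]     # still present while the controller is merely reconnecting
    bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name')
    [[ "$bdev" == "NVMe0n1" ]]    # the namespace bdev backing the verify workload

At 16:10:25 both checks still pass; later in the trace (the '' == '' comparisons at 16:10:30 in timeout.sh@62 and @63) the same queries come back empty once the controller has been given up on.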
00:33:27.002 00:33:27.002 Latency(us) 00:33:27.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:27.002 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:27.002 Verification LBA range: start 0x0 length 0x4000 00:33:27.002 NVMe0n1 : 8.21 1846.43 7.21 15.60 0.00 68652.50 3410.85 7015926.69 00:33:27.002 =================================================================================================================== 00:33:27.002 Total : 1846.43 7.21 15.60 0.00 68652.50 3410.85 7015926.69 00:33:27.002 0 00:33:27.568 16:10:30 -- host/timeout.sh@62 -- # get_controller 00:33:27.568 16:10:30 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:27.568 16:10:30 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:33:27.827 16:10:30 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:33:27.827 16:10:30 -- host/timeout.sh@63 -- # get_bdev 00:33:27.827 16:10:30 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:33:27.827 16:10:30 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:33:28.115 16:10:30 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:33:28.115 16:10:30 -- host/timeout.sh@65 -- # wait 73344 00:33:28.115 16:10:30 -- host/timeout.sh@67 -- # killprocess 73325 00:33:28.115 16:10:30 -- common/autotest_common.sh@926 -- # '[' -z 73325 ']' 00:33:28.115 16:10:30 -- common/autotest_common.sh@930 -- # kill -0 73325 00:33:28.115 16:10:30 -- common/autotest_common.sh@931 -- # uname 00:33:28.115 16:10:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:28.115 16:10:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73325 00:33:28.115 killing process with pid 73325 00:33:28.115 Received shutdown signal, test time was about 9.411107 seconds 00:33:28.115 00:33:28.115 Latency(us) 00:33:28.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.116 =================================================================================================================== 00:33:28.116 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:28.116 16:10:30 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:33:28.116 16:10:30 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:33:28.116 16:10:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73325' 00:33:28.116 16:10:30 -- common/autotest_common.sh@945 -- # kill 73325 00:33:28.116 16:10:30 -- common/autotest_common.sh@950 -- # wait 73325 00:33:28.373 16:10:31 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:28.630 [2024-07-22 16:10:31.287682] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:28.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:33:28.630 16:10:31 -- host/timeout.sh@74 -- # bdevperf_pid=73466 00:33:28.630 16:10:31 -- host/timeout.sh@76 -- # waitforlisten 73466 /var/tmp/bdevperf.sock 00:33:28.630 16:10:31 -- common/autotest_common.sh@819 -- # '[' -z 73466 ']' 00:33:28.630 16:10:31 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:33:28.630 16:10:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:28.630 16:10:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:28.630 16:10:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:28.630 16:10:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:28.630 16:10:31 -- common/autotest_common.sh@10 -- # set +x 00:33:28.630 [2024-07-22 16:10:31.354857] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:33:28.630 [2024-07-22 16:10:31.354959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73466 ] 00:33:28.630 [2024-07-22 16:10:31.487661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.888 [2024-07-22 16:10:31.546722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:29.821 16:10:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:29.821 16:10:32 -- common/autotest_common.sh@852 -- # return 0 00:33:29.821 16:10:32 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:29.821 16:10:32 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:33:30.079 NVMe0n1 00:33:30.079 16:10:32 -- host/timeout.sh@84 -- # rpc_pid=73494 00:33:30.079 16:10:32 -- host/timeout.sh@86 -- # sleep 1 00:33:30.079 16:10:32 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:30.340 Running I/O for 10 seconds... 
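The bdevperf setup traced just above can be read as the following sketch; the binary paths, RPC socket, flags and addresses are copied verbatim from the trace, and only the comments and line breaks are added:

    # RPC sequence exactly as traced above, issued against the bdevperf RPC socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    # Attach the TCP target with a 1 s reconnect delay, 2 s fast-io-fail and 5 s controller-loss timeout
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
    # Start the 10-second workload configured on the bdevperf command line above (-q 128 -o 4096 -w verify -t 10)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Immediately below, timeout.sh@87 removes the 10.0.0.2:4420 listener mid-run; the long run of ABORTED - SQ DELETION completions that follows is the in-flight verify I/O being failed back as the queue pair is torn down.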
00:33:31.277 16:10:33 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:31.277 [2024-07-22 16:10:34.082855] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.277 [2024-07-22 16:10:34.082949] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.277 [2024-07-22 16:10:34.082971] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.277 [2024-07-22 16:10:34.082988] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.277 [2024-07-22 16:10:34.083004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.277 [2024-07-22 16:10:34.083017] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.277 [2024-07-22 16:10:34.083032] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.277 [2024-07-22 16:10:34.083047] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.277 [2024-07-22 16:10:34.083060] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.277 [2024-07-22 16:10:34.083076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.277 [2024-07-22 16:10:34.083090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.278 [2024-07-22 16:10:34.083104] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.278 [2024-07-22 16:10:34.083119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.278 [2024-07-22 16:10:34.083133] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.278 [2024-07-22 16:10:34.083148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.278 [2024-07-22 16:10:34.083162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.278 [2024-07-22 16:10:34.083176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.278 [2024-07-22 16:10:34.083190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.278 [2024-07-22 16:10:34.083205] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.278 [2024-07-22 16:10:34.083220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.278 [2024-07-22 16:10:34.083236] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.278 [2024-07-22 16:10:34.083250] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e55f0 is same with the state(5) to be set 00:33:31.278 [2024-07-22 16:10:34.083327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:111096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:111104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110448 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:111160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:111176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:31.278 [2024-07-22 16:10:34.083803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:111200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:111216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.278 [2024-07-22 16:10:34.083907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.278 [2024-07-22 16:10:34.083927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.278 [2024-07-22 16:10:34.083948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.083988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.083999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.084009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.084020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.084030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.084041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 
16:10:34.084050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.084062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.084071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.084082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.084091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.084102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.084112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.278 [2024-07-22 16:10:34.084124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.278 [2024-07-22 16:10:34.084132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.279 [2024-07-22 16:10:34.084173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.279 [2024-07-22 16:10:34.084234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.279 [2024-07-22 16:10:34.084254] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.279 [2024-07-22 16:10:34.084294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.279 [2024-07-22 16:10:34.084316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.279 [2024-07-22 16:10:34.084346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.279 [2024-07-22 16:10:34.084367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.279 [2024-07-22 16:10:34.084408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084470] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.279 [2024-07-22 16:10:34.084504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.279 [2024-07-22 16:10:34.084525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.279 [2024-07-22 16:10:34.084547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.279 [2024-07-22 16:10:34.084568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.279 [2024-07-22 16:10:34.084822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:111448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.279 [2024-07-22 16:10:34.084906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.279 [2024-07-22 16:10:34.084927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:111472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.084947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.279 [2024-07-22 16:10:34.084967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.279 [2024-07-22 16:10:34.084987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.279 [2024-07-22 16:10:34.084998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:111496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.279 [2024-07-22 16:10:34.085008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:111512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:111528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 
[2024-07-22 16:10:34.085142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.280 [2024-07-22 16:10:34.085329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.280 [2024-07-22 16:10:34.085350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085361] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.280 [2024-07-22 16:10:34.085371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:111576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.280 [2024-07-22 16:10:34.085391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.280 [2024-07-22 16:10:34.085433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.280 [2024-07-22 16:10:34.085453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:111616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.280 [2024-07-22 16:10:34.085550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.280 [2024-07-22 16:10:34.085570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:111656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085792] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:77 nsid:1 lba:111664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.280 [2024-07-22 16:10:34.085801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.280 [2024-07-22 16:10:34.085822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.280 [2024-07-22 16:10:34.085842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.280 [2024-07-22 16:10:34.085853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.281 [2024-07-22 16:10:34.085863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.281 [2024-07-22 16:10:34.085874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:111696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.281 [2024-07-22 16:10:34.085884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.281 [2024-07-22 16:10:34.085895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:111704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.281 [2024-07-22 16:10:34.085904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.281 [2024-07-22 16:10:34.085915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.281 [2024-07-22 16:10:34.085924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.281 [2024-07-22 16:10:34.085935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.281 [2024-07-22 16:10:34.085944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.281 [2024-07-22 16:10:34.085956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.281 [2024-07-22 16:10:34.085967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.281 [2024-07-22 16:10:34.085978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:31.281 [2024-07-22 16:10:34.085987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.281 [2024-07-22 16:10:34.085998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 
lba:111744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.281 [2024-07-22 16:10:34.086007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.281 [2024-07-22 16:10:34.086018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:111048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.281 [2024-07-22 16:10:34.086027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.281 [2024-07-22 16:10:34.086039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.281 [2024-07-22 16:10:34.086047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.281 [2024-07-22 16:10:34.086058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:111064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.281 [2024-07-22 16:10:34.086067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.281 [2024-07-22 16:10:34.086078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.281 [2024-07-22 16:10:34.086087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.281 [2024-07-22 16:10:34.086098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:111144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.281 [2024-07-22 16:10:34.086107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.281 [2024-07-22 16:10:34.086119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.281 [2024-07-22 16:10:34.086128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.281 [2024-07-22 16:10:34.086138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f3320 is same with the state(5) to be set 00:33:31.281 [2024-07-22 16:10:34.086151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:31.281 [2024-07-22 16:10:34.086159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:31.281 [2024-07-22 16:10:34.086168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111192 len:8 PRP1 0x0 PRP2 0x0 00:33:31.281 [2024-07-22 16:10:34.086177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:31.281 [2024-07-22 16:10:34.086226] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11f3320 was disconnected and freed. reset controller. 
00:33:31.281 [2024-07-22 16:10:34.086332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:31.281 [2024-07-22 16:10:34.086351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:31.281 [2024-07-22 16:10:34.086362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:31.281 [2024-07-22 16:10:34.086372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:31.281 [2024-07-22 16:10:34.086382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:31.281 [2024-07-22 16:10:34.086391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:31.281 [2024-07-22 16:10:34.086401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:31.281 [2024-07-22 16:10:34.086410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:31.281 [2024-07-22 16:10:34.086419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0000 is same with the state(5) to be set
00:33:31.281 [2024-07-22 16:10:34.086664] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:31.281 [2024-07-22 16:10:34.086691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b0000 (9): Bad file descriptor
00:33:31.281 [2024-07-22 16:10:34.086794] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.281 [2024-07-22 16:10:34.086873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.281 [2024-07-22 16:10:34.086934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.281 [2024-07-22 16:10:34.086952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b0000 with addr=10.0.0.2, port=4420
00:33:31.281 [2024-07-22 16:10:34.086963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0000 is same with the state(5) to be set
00:33:31.281 [2024-07-22 16:10:34.086983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b0000 (9): Bad file descriptor
00:33:31.281 [2024-07-22 16:10:34.087014] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:31.281 [2024-07-22 16:10:34.087025] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:31.281 [2024-07-22 16:10:34.087036] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:31.281 [2024-07-22 16:10:34.087056] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:31.281 [2024-07-22 16:10:34.087068] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:31.281 16:10:34 -- host/timeout.sh@90 -- # sleep 1
00:33:32.655 [2024-07-22 16:10:35.087212] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.655 [2024-07-22 16:10:35.087327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.655 [2024-07-22 16:10:35.087376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.655 [2024-07-22 16:10:35.087394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b0000 with addr=10.0.0.2, port=4420
00:33:32.655 [2024-07-22 16:10:35.087407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0000 is same with the state(5) to be set
00:33:32.655 [2024-07-22 16:10:35.087435] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b0000 (9): Bad file descriptor
00:33:32.655 [2024-07-22 16:10:35.087467] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:32.655 [2024-07-22 16:10:35.087479] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:32.655 [2024-07-22 16:10:35.087512] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:32.655 [2024-07-22 16:10:35.087543] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:32.655 [2024-07-22 16:10:35.087554] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:32.655 16:10:35 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:32.655 [2024-07-22 16:10:35.379711] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:32.655 16:10:35 -- host/timeout.sh@92 -- # wait 73494
00:33:33.619 [2024-07-22 16:10:36.106135] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:33:40.174
00:33:40.174 Latency(us)
00:33:40.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:40.174 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:40.174 Verification LBA range: start 0x0 length 0x4000
00:33:40.174 NVMe0n1 : 10.01 8214.02 32.09 0.00 0.00 15552.30 1258.59 3019898.88
00:33:40.174 ===================================================================================================================
00:33:40.174 Total : 8214.02 32.09 0.00 0.00 15552.30 1258.59 3019898.88
00:33:40.174 0
00:33:40.174 16:10:43 -- host/timeout.sh@97 -- # rpc_pid=73600
00:33:40.174 16:10:43 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:33:40.174 16:10:43 -- host/timeout.sh@98 -- # sleep 1
00:33:40.432 Running I/O for 10 seconds...
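The trace above is the recovery half of the timeout test: with the target's TCP listener on 10.0.0.2:4420 torn down earlier in the run, every host reconnect attempt fails with connect() errno 111 (ECONNREFUSED) until host/timeout.sh re-adds the listener through rpc.py, after which the controller reset completes and bdevperf prints the verify-workload summary. Below is a minimal shell sketch of that drop-and-restore cycle, assembled only from the rpc.py and bdevperf.py invocations visible in this log; the RPC, BPERF and NQN variable names are illustrative and not taken from the original script.

    #!/usr/bin/env bash
    # Illustrative sketch of the listener drop/restore cycle, not the original host/timeout.sh.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py                       # SPDK JSON-RPC client
    BPERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py # drives an already-running bdevperf
    NQN=nqn.2016-06.io.spdk:cnode1

    # Drop the TCP listener; the attached host then fails every reconnect with errno 111.
    "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    # Restore the listener; the host's controller reset can now succeed.
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    # Run the queued I/O job against the reconnected controller and print the latency summary.
    "$BPERF" -s /var/tmp/bdevperf.sock perform_tests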
00:33:41.366 16:10:44 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:41.626 [2024-07-22 16:10:44.401626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6560 is same with the state(5) to be set 00:33:41.626 [2024-07-22 16:10:44.401693] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6560 is same with the state(5) to be set 00:33:41.626 [2024-07-22 16:10:44.401708] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6560 is same with the state(5) to be set 00:33:41.626 [2024-07-22 16:10:44.401717] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6560 is same with the state(5) to be set 00:33:41.626 [2024-07-22 16:10:44.401726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6560 is same with the state(5) to be set 00:33:41.626 [2024-07-22 16:10:44.402330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.626 [2024-07-22 16:10:44.402373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.626 [2024-07-22 16:10:44.402410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.626 [2024-07-22 16:10:44.402431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.626 [2024-07-22 16:10:44.402452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.626 [2024-07-22 16:10:44.402468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.626 [2024-07-22 16:10:44.402509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.626 [2024-07-22 16:10:44.402531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.626 [2024-07-22 16:10:44.402549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.626 [2024-07-22 16:10:44.402566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.626 [2024-07-22 16:10:44.402583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.626 [2024-07-22 16:10:44.402598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.626 [2024-07-22 16:10:44.402616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.626 [2024-07-22 16:10:44.402632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.626 [2024-07-22 16:10:44.402651] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.626 [2024-07-22 16:10:44.402666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.626 [2024-07-22 16:10:44.402683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.626 [2024-07-22 16:10:44.402699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.626 [2024-07-22 16:10:44.402718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.626 [2024-07-22 16:10:44.402733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.626 [2024-07-22 16:10:44.402751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.626 [2024-07-22 16:10:44.402766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.626 [2024-07-22 16:10:44.402783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.626 [2024-07-22 16:10:44.402799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.626 [2024-07-22 16:10:44.402818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.626 [2024-07-22 16:10:44.402834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.626 [2024-07-22 16:10:44.402853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.626 [2024-07-22 16:10:44.402868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.626 [2024-07-22 16:10:44.402886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.626 [2024-07-22 16:10:44.402919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.626 [2024-07-22 16:10:44.402939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.626 [2024-07-22 16:10:44.402955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.626 [2024-07-22 16:10:44.402972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.626 [2024-07-22 16:10:44.402988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.626 [2024-07-22 16:10:44.403007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 
lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.626 [2024-07-22 16:10:44.403024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.626 [2024-07-22 16:10:44.403043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.626 [2024-07-22 16:10:44.403059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.627 [2024-07-22 16:10:44.403091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.627 [2024-07-22 16:10:44.403124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.627 [2024-07-22 16:10:44.403169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.403202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.627 [2024-07-22 16:10:44.403236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.627 [2024-07-22 16:10:44.403268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.627 [2024-07-22 16:10:44.403304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.627 [2024-07-22 16:10:44.403338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.403370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.627 [2024-07-22 16:10:44.403404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.403437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.403469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.403525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.403558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.403595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.403628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.403661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.403694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 
[2024-07-22 16:10:44.403727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.403759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.403792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.403826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.403859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.403893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.403927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.403960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.403979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.403994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.404012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.404027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.404044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.404060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.404078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.404093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.404111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.627 [2024-07-22 16:10:44.404126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.404144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.404158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.404177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.404192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.404210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.627 [2024-07-22 16:10:44.404224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.404243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.627 [2024-07-22 16:10:44.404259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.404277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.404291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.404309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.627 [2024-07-22 16:10:44.404323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.404340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.627 [2024-07-22 16:10:44.404357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.627 [2024-07-22 16:10:44.404374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.627 [2024-07-22 16:10:44.404390] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.404407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.628 [2024-07-22 16:10:44.404423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.404442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.404458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.404476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.628 [2024-07-22 16:10:44.404510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.404531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.404546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.404565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:100608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.404581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.404599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.404613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.404631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.404647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.404665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.628 [2024-07-22 16:10:44.404680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.404699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.404713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.404732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.628 [2024-07-22 16:10:44.404748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.404765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.628 [2024-07-22 16:10:44.404781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.404799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.404814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.404832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.404847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.404865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.404879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.404897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.404913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.404931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.628 [2024-07-22 16:10:44.404946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.404963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.404978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.404997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.405013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.405031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.405046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.405064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.405079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.405098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.405113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.405130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.405144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.405160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.405174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.405192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.405208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.405226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.405241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.405258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.628 [2024-07-22 16:10:44.405274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.405291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.405307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.405325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.628 [2024-07-22 16:10:44.405340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.405359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.628 [2024-07-22 16:10:44.405374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.405391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.405406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 
[2024-07-22 16:10:44.405424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.405439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.405456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.405472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.405507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.405525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.405543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.405558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.405577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.405591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.405611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:100216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.628 [2024-07-22 16:10:44.405627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.628 [2024-07-22 16:10:44.405645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.629 [2024-07-22 16:10:44.405660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.405678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.629 [2024-07-22 16:10:44.405694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.405712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.629 [2024-07-22 16:10:44.405727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.405744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:100760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.629 [2024-07-22 16:10:44.405759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.405777] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.629 [2024-07-22 16:10:44.405792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.405809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.629 [2024-07-22 16:10:44.405824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.405842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.629 [2024-07-22 16:10:44.405857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.405875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:100792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.629 [2024-07-22 16:10:44.405890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.405908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.629 [2024-07-22 16:10:44.405923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.405946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.629 [2024-07-22 16:10:44.405964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.405981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.629 [2024-07-22 16:10:44.405996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.629 [2024-07-22 16:10:44.406027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.629 [2024-07-22 16:10:44.406060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.629 [2024-07-22 16:10:44.406093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.629 [2024-07-22 16:10:44.406126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.629 [2024-07-22 16:10:44.406159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.629 [2024-07-22 16:10:44.406191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.629 [2024-07-22 16:10:44.406224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.629 [2024-07-22 16:10:44.406257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.629 [2024-07-22 16:10:44.406292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.629 [2024-07-22 16:10:44.406326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.629 [2024-07-22 16:10:44.406359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.629 [2024-07-22 16:10:44.406392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.629 [2024-07-22 16:10:44.406427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406445] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.629 [2024-07-22 16:10:44.406461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.629 [2024-07-22 16:10:44.406520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.629 [2024-07-22 16:10:44.406554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.629 [2024-07-22 16:10:44.406586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.629 [2024-07-22 16:10:44.406619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.629 [2024-07-22 16:10:44.406652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.629 [2024-07-22 16:10:44.406685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.629 [2024-07-22 16:10:44.406717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.629 [2024-07-22 16:10:44.406749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:41.629 [2024-07-22 16:10:44.406845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:41.629 [2024-07-22 16:10:44.406860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100328 len:8 PRP1 0x0 PRP2 0x0 00:33:41.629 [2024-07-22 16:10:44.406874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.629 [2024-07-22 16:10:44.406961] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12286c0 was disconnected and freed. reset controller. 00:33:41.630 [2024-07-22 16:10:44.407316] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:41.630 [2024-07-22 16:10:44.407475] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b0000 (9): Bad file descriptor 00:33:41.630 [2024-07-22 16:10:44.407665] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-22 16:10:44.407748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-22 16:10:44.407819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.630 [2024-07-22 16:10:44.407846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b0000 with addr=10.0.0.2, port=4420 00:33:41.630 [2024-07-22 16:10:44.407864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0000 is same with the state(5) to be set 00:33:41.630 [2024-07-22 16:10:44.407896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b0000 (9): Bad file descriptor 00:33:41.630 [2024-07-22 16:10:44.407925] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:41.630 [2024-07-22 16:10:44.407939] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:41.630 [2024-07-22 16:10:44.407956] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:41.630 [2024-07-22 16:10:44.407990] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:41.630 [2024-07-22 16:10:44.408009] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:41.630 16:10:44 -- host/timeout.sh@101 -- # sleep 3 00:33:42.564 [2024-07-22 16:10:45.408214] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-22 16:10:45.408373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-22 16:10:45.408458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.564 [2024-07-22 16:10:45.408512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b0000 with addr=10.0.0.2, port=4420 00:33:42.564 [2024-07-22 16:10:45.408537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0000 is same with the state(5) to be set 00:33:42.564 [2024-07-22 16:10:45.408580] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b0000 (9): Bad file descriptor 00:33:42.564 [2024-07-22 16:10:45.408612] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:42.564 [2024-07-22 16:10:45.408628] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:42.564 [2024-07-22 16:10:45.408646] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:42.564 [2024-07-22 16:10:45.408687] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:42.564 [2024-07-22 16:10:45.408709] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:43.937 [2024-07-22 16:10:46.408857] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.937 [2024-07-22 16:10:46.408966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.937 [2024-07-22 16:10:46.409014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.937 [2024-07-22 16:10:46.409031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b0000 with addr=10.0.0.2, port=4420 00:33:43.937 [2024-07-22 16:10:46.409045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0000 is same with the state(5) to be set 00:33:43.937 [2024-07-22 16:10:46.409072] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b0000 (9): Bad file descriptor 00:33:43.937 [2024-07-22 16:10:46.409091] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:43.937 [2024-07-22 16:10:46.409101] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:43.937 [2024-07-22 16:10:46.409112] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:43.937 [2024-07-22 16:10:46.409139] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:43.937 [2024-07-22 16:10:46.409150] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.871 [2024-07-22 16:10:47.410760] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.871 [2024-07-22 16:10:47.410875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.871 [2024-07-22 16:10:47.410935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.871 [2024-07-22 16:10:47.410953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b0000 with addr=10.0.0.2, port=4420 00:33:44.871 [2024-07-22 16:10:47.410967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0000 is same with the state(5) to be set 00:33:44.871 [2024-07-22 16:10:47.411164] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b0000 (9): Bad file descriptor 00:33:44.871 [2024-07-22 16:10:47.411330] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.871 [2024-07-22 16:10:47.411343] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.871 [2024-07-22 16:10:47.411355] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.871 [2024-07-22 16:10:47.414049] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.871 [2024-07-22 16:10:47.414083] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.871 16:10:47 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:45.143 [2024-07-22 16:10:47.754783] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:45.143 16:10:47 -- host/timeout.sh@103 -- # wait 73600
00:33:45.709 [2024-07-22 16:10:48.445353] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:33:50.975
00:33:50.975 Latency(us)
00:33:50.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:50.975 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:50.975 Verification LBA range: start 0x0 length 0x4000
00:33:50.975 NVMe0n1 : 10.01 7286.61 28.46 5611.81 0.00 9906.17 420.77 3019898.88
00:33:50.975 ===================================================================================================================
00:33:50.975 Total : 7286.61 28.46 5611.81 0.00 9906.17 0.00 3019898.88
00:33:50.975 0
00:33:50.975 16:10:53 -- host/timeout.sh@105 -- # killprocess 73466
00:33:50.975 16:10:53 -- common/autotest_common.sh@926 -- # '[' -z 73466 ']'
00:33:50.975 16:10:53 -- common/autotest_common.sh@930 -- # kill -0 73466
00:33:50.975 16:10:53 -- common/autotest_common.sh@931 -- # uname
00:33:50.975 16:10:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:33:50.975 16:10:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73466
00:33:50.975 killing process with pid 73466
Received shutdown signal, test time was about 10.000000 seconds
00:33:50.975
00:33:50.975 Latency(us)
00:33:50.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:50.975 ===================================================================================================================
00:33:50.975 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:50.975 16:10:53 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:33:50.975 16:10:53 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:33:50.975 16:10:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73466'
00:33:50.975 16:10:53 -- common/autotest_common.sh@945 -- # kill 73466
00:33:50.975 16:10:53 -- common/autotest_common.sh@950 -- # wait 73466
00:33:50.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:33:50.975 16:10:53 -- host/timeout.sh@110 -- # bdevperf_pid=73709
00:33:50.975 16:10:53 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:33:50.975 16:10:53 -- host/timeout.sh@112 -- # waitforlisten 73709 /var/tmp/bdevperf.sock
00:33:50.975 16:10:53 -- common/autotest_common.sh@819 -- # '[' -z 73709 ']'
00:33:50.975 16:10:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:33:50.975 16:10:53 -- common/autotest_common.sh@824 -- # local max_retries=100
00:33:50.975 16:10:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:33:50.975 16:10:53 -- common/autotest_common.sh@828 -- # xtrace_disable
00:33:50.975 16:10:53 -- common/autotest_common.sh@10 -- # set +x
00:33:50.975 [2024-07-22 16:10:53.473358] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:33:50.975 [2024-07-22 16:10:53.473877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73709 ]
00:33:50.975 [2024-07-22 16:10:53.610942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:50.975 [2024-07-22 16:10:53.697203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:33:51.909 16:10:54 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:33:51.909 16:10:54 -- common/autotest_common.sh@852 -- # return 0
00:33:51.909 16:10:54 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 73709 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:33:51.909 16:10:54 -- host/timeout.sh@116 -- # dtrace_pid=73725
00:33:51.909 16:10:54 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:33:51.909 16:10:54 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:33:52.167 NVMe0n1
00:33:52.425 16:10:55 -- host/timeout.sh@124 -- # rpc_pid=73772
00:33:52.425 16:10:55 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:33:52.425 16:10:55 -- host/timeout.sh@125 -- # sleep 1
00:33:52.425 Running I/O for 10 seconds...
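The bdevperf setup captured above amounts to a short sequence of shell steps. The following is a minimal sketch only, reconstructed from the log lines above rather than taken from the test script itself, and it assumes the same SPDK checkout under /home/vagrant/spdk_repo/spdk and a target already listening on 10.0.0.2:4420:

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bdevperf.sock
  # Start bdevperf idle (-z) on core 2 (-m 0x4): queue depth 128, 4096-byte random reads, 10 s run.
  $SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w randread -t 10 -f &
  # Attach the nvmf_timeout bpftrace probe to the bdevperf process.
  $SPDK/scripts/bpftrace.sh $! $SPDK/scripts/bpf/nvmf_timeout.bt &
  # Apply the test's bdev_nvme options, then attach the TCP controller with a
  # 5 s ctrlr-loss timeout and a 2 s reconnect delay.
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options -r -1 -e 9
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # Start the queued I/O.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests &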
00:33:53.359 16:10:56 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:53.620 [2024-07-22 16:10:56.334515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.620 [2024-07-22 16:10:56.334827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.620 [2024-07-22 16:10:56.335012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.620 [2024-07-22 16:10:56.335170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.620 [2024-07-22 16:10:56.335445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.620 [2024-07-22 16:10:56.335623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.620 [2024-07-22 16:10:56.335866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.620 [2024-07-22 16:10:56.335882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.620 [2024-07-22 16:10:56.335894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.620 [2024-07-22 16:10:56.335903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.620 [2024-07-22 16:10:56.335914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.620 [2024-07-22 16:10:56.335923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.620 [2024-07-22 16:10:56.335935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.620 [2024-07-22 16:10:56.335944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.620 [2024-07-22 16:10:56.335955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.620 [2024-07-22 16:10:56.335964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.335975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.335985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.335996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:36864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336005] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:104960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:115696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:85760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:56136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:104960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.621 [2024-07-22 16:10:56.336797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.621 [2024-07-22 16:10:56.336806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.336817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.336826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.336837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:127688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.336847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.336858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.336867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:53.622 [2024-07-22 16:10:56.336879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.336887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.336899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.336907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.336918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.336928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.336939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.336949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.336960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.336969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.336980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.336989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:115944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337082] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:129872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:114680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:108848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337302] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:80 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:35976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.622 [2024-07-22 16:10:56.337635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.622 [2024-07-22 16:10:56.337646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:109864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.337655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.337666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.337675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.337686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:46752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.337695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.337706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.337715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.337726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22576 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.337735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.337748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.337757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.337768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.337777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.337788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.337798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.337809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:113840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.337818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.337829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.337838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.337850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:47880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.337859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.337870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.337879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.337890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.337899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.337910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.337919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.337930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:39688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:53.623 [2024-07-22 16:10:56.337939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.337951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.337961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.337972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.337981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.337992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 
16:10:56.338145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:41664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:88672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:36248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:121512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:122880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:115760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.623 [2024-07-22 16:10:56.338455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.623 [2024-07-22 16:10:56.338469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.624 [2024-07-22 16:10:56.338481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.624 [2024-07-22 16:10:56.338504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.624 [2024-07-22 16:10:56.338515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82560 is same with the state(5) to be set 00:33:53.624 [2024-07-22 16:10:56.338530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:53.624 [2024-07-22 16:10:56.338538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:53.624 [2024-07-22 16:10:56.338548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82496 len:8 PRP1 0x0 PRP2 0x0 00:33:53.624 [2024-07-22 16:10:56.338557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.624 [2024-07-22 16:10:56.338620] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f82560 was disconnected and freed. reset controller. 
00:33:53.624 [2024-07-22 16:10:56.338721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.624 [2024-07-22 16:10:56.338738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.624 [2024-07-22 16:10:56.338749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.624 [2024-07-22 16:10:56.338759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.624 [2024-07-22 16:10:56.338769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.624 [2024-07-22 16:10:56.338778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.624 [2024-07-22 16:10:56.338788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:53.624 [2024-07-22 16:10:56.338797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.624 [2024-07-22 16:10:56.338806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3a3d0 is same with the state(5) to be set 00:33:53.624 [2024-07-22 16:10:56.339123] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.624 [2024-07-22 16:10:56.339158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3a3d0 (9): Bad file descriptor 00:33:53.624 [2024-07-22 16:10:56.339284] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.624 [2024-07-22 16:10:56.339387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.624 [2024-07-22 16:10:56.339457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.624 [2024-07-22 16:10:56.339502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f3a3d0 with addr=10.0.0.2, port=4420 00:33:53.624 [2024-07-22 16:10:56.339525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3a3d0 is same with the state(5) to be set 00:33:53.624 [2024-07-22 16:10:56.339558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3a3d0 (9): Bad file descriptor 00:33:53.624 [2024-07-22 16:10:56.339594] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:53.624 [2024-07-22 16:10:56.339612] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:53.624 [2024-07-22 16:10:56.339630] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:53.624 [2024-07-22 16:10:56.339662] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:53.624 [2024-07-22 16:10:56.339682] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:53.624 16:10:56 -- host/timeout.sh@128 -- # wait 73772 00:33:55.523 [2024-07-22 16:10:58.339854] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-07-22 16:10:58.339996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-07-22 16:10:58.340047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:55.523 [2024-07-22 16:10:58.340065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f3a3d0 with addr=10.0.0.2, port=4420 00:33:55.523 [2024-07-22 16:10:58.340078] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3a3d0 is same with the state(5) to be set 00:33:55.523 [2024-07-22 16:10:58.340106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3a3d0 (9): Bad file descriptor 00:33:55.523 [2024-07-22 16:10:58.340152] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:55.523 [2024-07-22 16:10:58.340173] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:55.523 [2024-07-22 16:10:58.340186] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:55.523 [2024-07-22 16:10:58.340214] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:55.523 [2024-07-22 16:10:58.340226] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.053 [2024-07-22 16:11:00.340436] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.053 [2024-07-22 16:11:00.340563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.053 [2024-07-22 16:11:00.340619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.053 [2024-07-22 16:11:00.340637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f3a3d0 with addr=10.0.0.2, port=4420 00:33:58.053 [2024-07-22 16:11:00.340651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3a3d0 is same with the state(5) to be set 00:33:58.053 [2024-07-22 16:11:00.340679] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3a3d0 (9): Bad file descriptor 00:33:58.053 [2024-07-22 16:11:00.340698] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.053 [2024-07-22 16:11:00.340708] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.053 [2024-07-22 16:11:00.340719] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.053 [2024-07-22 16:11:00.340747] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:58.053 [2024-07-22 16:11:00.340759] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.494 [2024-07-22 16:11:02.340826] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:59.494 [2024-07-22 16:11:02.340903] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.494 [2024-07-22 16:11:02.340916] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.494 [2024-07-22 16:11:02.340927] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:33:59.494 [2024-07-22 16:11:02.340956] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.866 00:34:00.866 Latency(us) 00:34:00.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:00.866 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:34:00.866 NVMe0n1 : 8.20 2014.50 7.87 15.62 0.00 62988.55 8519.68 7046430.72 00:34:00.866 =================================================================================================================== 00:34:00.866 Total : 2014.50 7.87 15.62 0.00 62988.55 8519.68 7046430.72 00:34:00.866 0 00:34:00.866 16:11:03 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:00.866 Attaching 5 probes... 00:34:00.866 1400.021812: reset bdev controller NVMe0 00:34:00.866 1400.111229: reconnect bdev controller NVMe0 00:34:00.866 3400.635405: reconnect delay bdev controller NVMe0 00:34:00.866 3400.658942: reconnect bdev controller NVMe0 00:34:00.866 5401.201995: reconnect delay bdev controller NVMe0 00:34:00.866 5401.230075: reconnect bdev controller NVMe0 00:34:00.866 7401.705913: reconnect delay bdev controller NVMe0 00:34:00.866 7401.733193: reconnect bdev controller NVMe0 00:34:00.866 16:11:03 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:34:00.866 16:11:03 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:34:00.866 16:11:03 -- host/timeout.sh@136 -- # kill 73725 00:34:00.866 16:11:03 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:00.866 16:11:03 -- host/timeout.sh@139 -- # killprocess 73709 00:34:00.866 16:11:03 -- common/autotest_common.sh@926 -- # '[' -z 73709 ']' 00:34:00.866 16:11:03 -- common/autotest_common.sh@930 -- # kill -0 73709 00:34:00.866 16:11:03 -- common/autotest_common.sh@931 -- # uname 00:34:00.866 16:11:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:00.866 16:11:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73709 00:34:00.866 killing process with pid 73709 00:34:00.866 Received shutdown signal, test time was about 8.254084 seconds 00:34:00.866 00:34:00.866 Latency(us) 00:34:00.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:00.866 =================================================================================================================== 00:34:00.866 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:00.866 16:11:03 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:34:00.866 16:11:03 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:34:00.866 16:11:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73709' 00:34:00.866 16:11:03 -- common/autotest_common.sh@945 -- # kill 73709 00:34:00.866 16:11:03 -- common/autotest_common.sh@950 -- # wait 73709 00:34:00.866 16:11:03 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:01.123 16:11:03 -- host/timeout.sh@143 -- # trap - SIGINT 
SIGTERM EXIT 00:34:01.123 16:11:03 -- host/timeout.sh@145 -- # nvmftestfini 00:34:01.123 16:11:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:01.123 16:11:03 -- nvmf/common.sh@116 -- # sync 00:34:01.123 16:11:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:01.123 16:11:03 -- nvmf/common.sh@119 -- # set +e 00:34:01.123 16:11:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:01.123 16:11:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:01.123 rmmod nvme_tcp 00:34:01.123 rmmod nvme_fabrics 00:34:01.123 rmmod nvme_keyring 00:34:01.123 16:11:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:01.123 16:11:03 -- nvmf/common.sh@123 -- # set -e 00:34:01.123 16:11:03 -- nvmf/common.sh@124 -- # return 0 00:34:01.123 16:11:03 -- nvmf/common.sh@477 -- # '[' -n 73266 ']' 00:34:01.123 16:11:03 -- nvmf/common.sh@478 -- # killprocess 73266 00:34:01.123 16:11:03 -- common/autotest_common.sh@926 -- # '[' -z 73266 ']' 00:34:01.123 16:11:03 -- common/autotest_common.sh@930 -- # kill -0 73266 00:34:01.123 16:11:03 -- common/autotest_common.sh@931 -- # uname 00:34:01.123 16:11:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:01.123 16:11:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73266 00:34:01.123 killing process with pid 73266 00:34:01.123 16:11:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:01.123 16:11:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:01.123 16:11:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73266' 00:34:01.123 16:11:03 -- common/autotest_common.sh@945 -- # kill 73266 00:34:01.123 16:11:03 -- common/autotest_common.sh@950 -- # wait 73266 00:34:01.380 16:11:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:34:01.380 16:11:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:01.380 16:11:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:01.380 16:11:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:01.380 16:11:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:01.380 16:11:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:01.380 16:11:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:01.380 16:11:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:01.380 16:11:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:34:01.380 ************************************ 00:34:01.380 END TEST nvmf_timeout 00:34:01.380 ************************************ 00:34:01.380 00:34:01.380 real 0m47.538s 00:34:01.380 user 2m20.194s 00:34:01.380 sys 0m5.711s 00:34:01.380 16:11:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:01.380 16:11:04 -- common/autotest_common.sh@10 -- # set +x 00:34:01.380 16:11:04 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:34:01.380 16:11:04 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:34:01.380 16:11:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:01.380 16:11:04 -- common/autotest_common.sh@10 -- # set +x 00:34:01.380 16:11:04 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:34:01.380 ************************************ 00:34:01.380 END TEST nvmf_tcp 00:34:01.380 ************************************ 00:34:01.380 00:34:01.380 real 10m37.775s 00:34:01.380 user 29m56.091s 00:34:01.380 sys 3m20.513s 00:34:01.380 16:11:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:01.380 16:11:04 -- common/autotest_common.sh@10 -- # set +x 00:34:01.638 16:11:04 -- spdk/autotest.sh@296 -- # [[ 1 
-eq 0 ]] 00:34:01.638 16:11:04 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:34:01.638 16:11:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:01.638 16:11:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:01.638 16:11:04 -- common/autotest_common.sh@10 -- # set +x 00:34:01.638 ************************************ 00:34:01.638 START TEST nvmf_dif 00:34:01.638 ************************************ 00:34:01.638 16:11:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:34:01.638 * Looking for test storage... 00:34:01.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:01.638 16:11:04 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:01.638 16:11:04 -- nvmf/common.sh@7 -- # uname -s 00:34:01.638 16:11:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:01.638 16:11:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:01.638 16:11:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:01.638 16:11:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:01.638 16:11:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:01.638 16:11:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:01.638 16:11:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:01.638 16:11:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:01.638 16:11:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:01.638 16:11:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:01.638 16:11:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:34:01.638 16:11:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:34:01.638 16:11:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:01.638 16:11:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:01.638 16:11:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:01.638 16:11:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:01.638 16:11:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:01.638 16:11:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:01.638 16:11:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:01.638 16:11:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.638 16:11:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.638 16:11:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.638 16:11:04 -- paths/export.sh@5 -- # export PATH 00:34:01.638 16:11:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.638 16:11:04 -- nvmf/common.sh@46 -- # : 0 00:34:01.638 16:11:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:01.638 16:11:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:01.638 16:11:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:01.638 16:11:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:01.638 16:11:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:01.638 16:11:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:01.638 16:11:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:01.638 16:11:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:01.638 16:11:04 -- target/dif.sh@15 -- # NULL_META=16 00:34:01.638 16:11:04 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:01.638 16:11:04 -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:01.638 16:11:04 -- target/dif.sh@15 -- # NULL_DIF=1 00:34:01.638 16:11:04 -- target/dif.sh@135 -- # nvmftestinit 00:34:01.638 16:11:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:01.638 16:11:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:01.638 16:11:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:01.638 16:11:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:01.638 16:11:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:01.638 16:11:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:01.638 16:11:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:01.638 16:11:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:01.638 16:11:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:34:01.638 16:11:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:34:01.638 16:11:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:34:01.638 16:11:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:34:01.638 16:11:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:34:01.638 16:11:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:34:01.638 16:11:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:01.638 16:11:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:01.638 16:11:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:34:01.638 16:11:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:34:01.638 16:11:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:01.638 16:11:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:01.638 16:11:04 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:01.638 16:11:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:01.638 
16:11:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:01.638 16:11:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:01.638 16:11:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:01.638 16:11:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:01.638 16:11:04 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:34:01.638 16:11:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:34:01.638 Cannot find device "nvmf_tgt_br" 00:34:01.638 16:11:04 -- nvmf/common.sh@154 -- # true 00:34:01.638 16:11:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:34:01.638 Cannot find device "nvmf_tgt_br2" 00:34:01.638 16:11:04 -- nvmf/common.sh@155 -- # true 00:34:01.638 16:11:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:34:01.638 16:11:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:34:01.638 Cannot find device "nvmf_tgt_br" 00:34:01.638 16:11:04 -- nvmf/common.sh@157 -- # true 00:34:01.638 16:11:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:34:01.638 Cannot find device "nvmf_tgt_br2" 00:34:01.638 16:11:04 -- nvmf/common.sh@158 -- # true 00:34:01.638 16:11:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:34:01.638 16:11:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:34:01.896 16:11:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:01.896 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:01.896 16:11:04 -- nvmf/common.sh@161 -- # true 00:34:01.896 16:11:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:01.896 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:01.896 16:11:04 -- nvmf/common.sh@162 -- # true 00:34:01.896 16:11:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:34:01.896 16:11:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:01.896 16:11:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:01.896 16:11:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:01.896 16:11:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:01.896 16:11:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:01.896 16:11:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:01.896 16:11:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:34:01.896 16:11:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:34:01.896 16:11:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:34:01.896 16:11:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:34:01.896 16:11:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:34:01.896 16:11:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:34:01.896 16:11:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:01.896 16:11:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:01.896 16:11:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:01.896 16:11:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:34:01.896 16:11:04 -- nvmf/common.sh@192 -- # ip link set 
nvmf_br up 00:34:01.896 16:11:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:34:01.896 16:11:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:01.896 16:11:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:01.896 16:11:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:01.896 16:11:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:01.897 16:11:04 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:34:01.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:01.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:34:01.897 00:34:01.897 --- 10.0.0.2 ping statistics --- 00:34:01.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.897 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:34:01.897 16:11:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:34:01.897 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:01.897 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:34:01.897 00:34:01.897 --- 10.0.0.3 ping statistics --- 00:34:01.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.897 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:34:01.897 16:11:04 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:01.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:01.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:34:01.897 00:34:01.897 --- 10.0.0.1 ping statistics --- 00:34:01.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.897 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:34:01.897 16:11:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:01.897 16:11:04 -- nvmf/common.sh@421 -- # return 0 00:34:01.897 16:11:04 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:34:01.897 16:11:04 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:34:02.155 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:02.414 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:34:02.414 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:34:02.414 16:11:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:02.414 16:11:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:02.414 16:11:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:02.414 16:11:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:02.414 16:11:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:02.414 16:11:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:02.414 16:11:05 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:02.414 16:11:05 -- target/dif.sh@137 -- # nvmfappstart 00:34:02.414 16:11:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:02.414 16:11:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:02.414 16:11:05 -- common/autotest_common.sh@10 -- # set +x 00:34:02.414 16:11:05 -- nvmf/common.sh@469 -- # nvmfpid=74217 00:34:02.414 16:11:05 -- nvmf/common.sh@470 -- # waitforlisten 74217 00:34:02.414 16:11:05 -- common/autotest_common.sh@819 -- # '[' -z 74217 ']' 00:34:02.414 16:11:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:02.414 16:11:05 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:34:02.414 16:11:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:02.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:02.414 16:11:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:02.414 16:11:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:02.414 16:11:05 -- common/autotest_common.sh@10 -- # set +x 00:34:02.414 [2024-07-22 16:11:05.145077] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:34:02.414 [2024-07-22 16:11:05.145185] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:02.683 [2024-07-22 16:11:05.285322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:02.683 [2024-07-22 16:11:05.341643] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:02.683 [2024-07-22 16:11:05.341814] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:02.683 [2024-07-22 16:11:05.341836] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:02.683 [2024-07-22 16:11:05.341849] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:02.683 [2024-07-22 16:11:05.341894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:03.630 16:11:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:03.630 16:11:06 -- common/autotest_common.sh@852 -- # return 0 00:34:03.630 16:11:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:03.630 16:11:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:03.630 16:11:06 -- common/autotest_common.sh@10 -- # set +x 00:34:03.630 16:11:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:03.630 16:11:06 -- target/dif.sh@139 -- # create_transport 00:34:03.630 16:11:06 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:03.630 16:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:03.630 16:11:06 -- common/autotest_common.sh@10 -- # set +x 00:34:03.630 [2024-07-22 16:11:06.164949] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:03.630 16:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:03.630 16:11:06 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:03.630 16:11:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:03.630 16:11:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:03.630 16:11:06 -- common/autotest_common.sh@10 -- # set +x 00:34:03.630 ************************************ 00:34:03.630 START TEST fio_dif_1_default 00:34:03.630 ************************************ 00:34:03.630 16:11:06 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:34:03.630 16:11:06 -- target/dif.sh@86 -- # create_subsystems 0 00:34:03.630 16:11:06 -- target/dif.sh@28 -- # local sub 00:34:03.630 16:11:06 -- target/dif.sh@30 -- # for sub in "$@" 00:34:03.630 16:11:06 -- target/dif.sh@31 -- # create_subsystem 0 00:34:03.630 16:11:06 -- target/dif.sh@18 -- # local sub_id=0 00:34:03.630 16:11:06 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 
64 512 --md-size 16 --dif-type 1 00:34:03.630 16:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:03.630 16:11:06 -- common/autotest_common.sh@10 -- # set +x 00:34:03.630 bdev_null0 00:34:03.630 16:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:03.630 16:11:06 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:03.630 16:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:03.630 16:11:06 -- common/autotest_common.sh@10 -- # set +x 00:34:03.630 16:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:03.630 16:11:06 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:03.630 16:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:03.630 16:11:06 -- common/autotest_common.sh@10 -- # set +x 00:34:03.630 16:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:03.631 16:11:06 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:03.631 16:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:03.631 16:11:06 -- common/autotest_common.sh@10 -- # set +x 00:34:03.631 [2024-07-22 16:11:06.209075] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:03.631 16:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:03.631 16:11:06 -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:03.631 16:11:06 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:03.631 16:11:06 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:03.631 16:11:06 -- nvmf/common.sh@520 -- # config=() 00:34:03.631 16:11:06 -- nvmf/common.sh@520 -- # local subsystem config 00:34:03.631 16:11:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:03.631 16:11:06 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:03.631 16:11:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:03.631 { 00:34:03.631 "params": { 00:34:03.631 "name": "Nvme$subsystem", 00:34:03.631 "trtype": "$TEST_TRANSPORT", 00:34:03.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:03.631 "adrfam": "ipv4", 00:34:03.631 "trsvcid": "$NVMF_PORT", 00:34:03.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:03.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:03.631 "hdgst": ${hdgst:-false}, 00:34:03.631 "ddgst": ${ddgst:-false} 00:34:03.631 }, 00:34:03.631 "method": "bdev_nvme_attach_controller" 00:34:03.631 } 00:34:03.631 EOF 00:34:03.631 )") 00:34:03.631 16:11:06 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:03.631 16:11:06 -- target/dif.sh@82 -- # gen_fio_conf 00:34:03.631 16:11:06 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:03.631 16:11:06 -- target/dif.sh@54 -- # local file 00:34:03.631 16:11:06 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:03.631 16:11:06 -- target/dif.sh@56 -- # cat 00:34:03.631 16:11:06 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:03.631 16:11:06 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:03.631 16:11:06 -- common/autotest_common.sh@1320 -- # shift 00:34:03.631 16:11:06 -- nvmf/common.sh@542 -- # cat 00:34:03.631 16:11:06 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:03.631 
16:11:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:03.631 16:11:06 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:03.631 16:11:06 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:03.631 16:11:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:03.631 16:11:06 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:03.631 16:11:06 -- target/dif.sh@72 -- # (( file <= files )) 00:34:03.631 16:11:06 -- nvmf/common.sh@544 -- # jq . 00:34:03.631 16:11:06 -- nvmf/common.sh@545 -- # IFS=, 00:34:03.631 16:11:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:03.631 "params": { 00:34:03.631 "name": "Nvme0", 00:34:03.631 "trtype": "tcp", 00:34:03.631 "traddr": "10.0.0.2", 00:34:03.631 "adrfam": "ipv4", 00:34:03.631 "trsvcid": "4420", 00:34:03.631 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:03.631 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:03.631 "hdgst": false, 00:34:03.631 "ddgst": false 00:34:03.631 }, 00:34:03.631 "method": "bdev_nvme_attach_controller" 00:34:03.631 }' 00:34:03.631 16:11:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:03.631 16:11:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:03.631 16:11:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:03.631 16:11:06 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:03.631 16:11:06 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:03.631 16:11:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:03.631 16:11:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:03.631 16:11:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:03.631 16:11:06 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:03.631 16:11:06 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:03.631 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:03.631 fio-3.35 00:34:03.631 Starting 1 thread 00:34:03.890 [2024-07-22 16:11:06.731871] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:34:03.890 [2024-07-22 16:11:06.731944] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:16.091 00:34:16.091 filename0: (groupid=0, jobs=1): err= 0: pid=74279: Mon Jul 22 16:11:16 2024 00:34:16.091 read: IOPS=8592, BW=33.6MiB/s (35.2MB/s)(336MiB/10001msec) 00:34:16.091 slat (nsec): min=6717, max=61187, avg=8862.81, stdev=2510.27 00:34:16.091 clat (usec): min=361, max=5112, avg=439.29, stdev=43.77 00:34:16.091 lat (usec): min=368, max=5145, avg=448.16, stdev=44.20 00:34:16.091 clat percentiles (usec): 00:34:16.091 | 1.00th=[ 400], 5.00th=[ 408], 10.00th=[ 412], 20.00th=[ 420], 00:34:16.091 | 30.00th=[ 424], 40.00th=[ 433], 50.00th=[ 437], 60.00th=[ 441], 00:34:16.091 | 70.00th=[ 449], 80.00th=[ 453], 90.00th=[ 465], 95.00th=[ 482], 00:34:16.091 | 99.00th=[ 529], 99.50th=[ 578], 99.90th=[ 619], 99.95th=[ 635], 00:34:16.091 | 99.99th=[ 693] 00:34:16.091 bw ( KiB/s): min=33920, max=35232, per=100.00%, avg=34480.84, stdev=351.57, samples=19 00:34:16.091 iops : min= 8480, max= 8808, avg=8620.21, stdev=87.89, samples=19 00:34:16.091 lat (usec) : 500=97.98%, 750=2.01% 00:34:16.091 lat (msec) : 4=0.01%, 10=0.01% 00:34:16.091 cpu : usr=84.81%, sys=13.16%, ctx=33, majf=0, minf=0 00:34:16.091 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:16.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.091 issued rwts: total=85932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.091 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:16.091 00:34:16.091 Run status group 0 (all jobs): 00:34:16.091 READ: bw=33.6MiB/s (35.2MB/s), 33.6MiB/s-33.6MiB/s (35.2MB/s-35.2MB/s), io=336MiB (352MB), run=10001-10001msec 00:34:16.091 16:11:17 -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:16.091 16:11:17 -- target/dif.sh@43 -- # local sub 00:34:16.091 16:11:17 -- target/dif.sh@45 -- # for sub in "$@" 00:34:16.091 16:11:17 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:16.091 16:11:17 -- target/dif.sh@36 -- # local sub_id=0 00:34:16.091 16:11:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:16.091 16:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.091 16:11:17 -- common/autotest_common.sh@10 -- # set +x 00:34:16.091 16:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.091 16:11:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:16.091 16:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.091 16:11:17 -- common/autotest_common.sh@10 -- # set +x 00:34:16.091 ************************************ 00:34:16.091 END TEST fio_dif_1_default 00:34:16.091 ************************************ 00:34:16.091 16:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.091 00:34:16.091 real 0m10.859s 00:34:16.091 user 0m9.018s 00:34:16.091 sys 0m1.530s 00:34:16.091 16:11:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:16.091 16:11:17 -- common/autotest_common.sh@10 -- # set +x 00:34:16.091 16:11:17 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:16.091 16:11:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:16.091 16:11:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:16.091 16:11:17 -- common/autotest_common.sh@10 -- # set +x 00:34:16.091 ************************************ 00:34:16.091 START TEST 
fio_dif_1_multi_subsystems 00:34:16.091 ************************************ 00:34:16.091 16:11:17 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:34:16.091 16:11:17 -- target/dif.sh@92 -- # local files=1 00:34:16.091 16:11:17 -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:16.091 16:11:17 -- target/dif.sh@28 -- # local sub 00:34:16.091 16:11:17 -- target/dif.sh@30 -- # for sub in "$@" 00:34:16.091 16:11:17 -- target/dif.sh@31 -- # create_subsystem 0 00:34:16.091 16:11:17 -- target/dif.sh@18 -- # local sub_id=0 00:34:16.091 16:11:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:16.091 16:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.091 16:11:17 -- common/autotest_common.sh@10 -- # set +x 00:34:16.091 bdev_null0 00:34:16.091 16:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.091 16:11:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:16.091 16:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.091 16:11:17 -- common/autotest_common.sh@10 -- # set +x 00:34:16.091 16:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.091 16:11:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:16.091 16:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.091 16:11:17 -- common/autotest_common.sh@10 -- # set +x 00:34:16.091 16:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.091 16:11:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:16.091 16:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.091 16:11:17 -- common/autotest_common.sh@10 -- # set +x 00:34:16.091 [2024-07-22 16:11:17.114151] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.091 16:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.091 16:11:17 -- target/dif.sh@30 -- # for sub in "$@" 00:34:16.091 16:11:17 -- target/dif.sh@31 -- # create_subsystem 1 00:34:16.091 16:11:17 -- target/dif.sh@18 -- # local sub_id=1 00:34:16.091 16:11:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:16.091 16:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.091 16:11:17 -- common/autotest_common.sh@10 -- # set +x 00:34:16.091 bdev_null1 00:34:16.091 16:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.091 16:11:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:16.091 16:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.091 16:11:17 -- common/autotest_common.sh@10 -- # set +x 00:34:16.091 16:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.092 16:11:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:16.092 16:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.092 16:11:17 -- common/autotest_common.sh@10 -- # set +x 00:34:16.092 16:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.092 16:11:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:16.092 16:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.092 16:11:17 -- 
common/autotest_common.sh@10 -- # set +x 00:34:16.092 16:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.092 16:11:17 -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:16.092 16:11:17 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:16.092 16:11:17 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:16.092 16:11:17 -- nvmf/common.sh@520 -- # config=() 00:34:16.092 16:11:17 -- nvmf/common.sh@520 -- # local subsystem config 00:34:16.092 16:11:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:16.092 16:11:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:16.092 { 00:34:16.092 "params": { 00:34:16.092 "name": "Nvme$subsystem", 00:34:16.092 "trtype": "$TEST_TRANSPORT", 00:34:16.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.092 "adrfam": "ipv4", 00:34:16.092 "trsvcid": "$NVMF_PORT", 00:34:16.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.092 "hdgst": ${hdgst:-false}, 00:34:16.092 "ddgst": ${ddgst:-false} 00:34:16.092 }, 00:34:16.092 "method": "bdev_nvme_attach_controller" 00:34:16.092 } 00:34:16.092 EOF 00:34:16.092 )") 00:34:16.092 16:11:17 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:16.092 16:11:17 -- target/dif.sh@82 -- # gen_fio_conf 00:34:16.092 16:11:17 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:16.092 16:11:17 -- target/dif.sh@54 -- # local file 00:34:16.092 16:11:17 -- target/dif.sh@56 -- # cat 00:34:16.092 16:11:17 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:16.092 16:11:17 -- nvmf/common.sh@542 -- # cat 00:34:16.092 16:11:17 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:16.092 16:11:17 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:16.092 16:11:17 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:16.092 16:11:17 -- common/autotest_common.sh@1320 -- # shift 00:34:16.092 16:11:17 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:16.092 16:11:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:16.092 16:11:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:16.092 16:11:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:16.092 { 00:34:16.092 "params": { 00:34:16.092 "name": "Nvme$subsystem", 00:34:16.092 "trtype": "$TEST_TRANSPORT", 00:34:16.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.092 "adrfam": "ipv4", 00:34:16.092 "trsvcid": "$NVMF_PORT", 00:34:16.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.092 "hdgst": ${hdgst:-false}, 00:34:16.092 "ddgst": ${ddgst:-false} 00:34:16.092 }, 00:34:16.092 "method": "bdev_nvme_attach_controller" 00:34:16.092 } 00:34:16.092 EOF 00:34:16.092 )") 00:34:16.092 16:11:17 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:16.092 16:11:17 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:16.092 16:11:17 -- target/dif.sh@72 -- # (( file <= files )) 00:34:16.092 16:11:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:16.092 16:11:17 -- target/dif.sh@73 -- # cat 00:34:16.092 16:11:17 -- nvmf/common.sh@542 -- # cat 00:34:16.092 16:11:17 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:16.092 16:11:17 -- target/dif.sh@72 
-- # (( file++ )) 00:34:16.092 16:11:17 -- target/dif.sh@72 -- # (( file <= files )) 00:34:16.092 16:11:17 -- nvmf/common.sh@544 -- # jq . 00:34:16.092 16:11:17 -- nvmf/common.sh@545 -- # IFS=, 00:34:16.092 16:11:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:16.092 "params": { 00:34:16.092 "name": "Nvme0", 00:34:16.092 "trtype": "tcp", 00:34:16.092 "traddr": "10.0.0.2", 00:34:16.092 "adrfam": "ipv4", 00:34:16.092 "trsvcid": "4420", 00:34:16.092 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:16.092 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:16.092 "hdgst": false, 00:34:16.092 "ddgst": false 00:34:16.092 }, 00:34:16.092 "method": "bdev_nvme_attach_controller" 00:34:16.092 },{ 00:34:16.092 "params": { 00:34:16.092 "name": "Nvme1", 00:34:16.092 "trtype": "tcp", 00:34:16.092 "traddr": "10.0.0.2", 00:34:16.092 "adrfam": "ipv4", 00:34:16.092 "trsvcid": "4420", 00:34:16.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:16.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:16.092 "hdgst": false, 00:34:16.092 "ddgst": false 00:34:16.092 }, 00:34:16.092 "method": "bdev_nvme_attach_controller" 00:34:16.092 }' 00:34:16.092 16:11:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:16.092 16:11:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:16.092 16:11:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:16.092 16:11:17 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:16.092 16:11:17 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:16.092 16:11:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:16.092 16:11:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:16.092 16:11:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:16.092 16:11:17 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:16.092 16:11:17 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:16.092 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:16.092 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:16.092 fio-3.35 00:34:16.092 Starting 2 threads 00:34:16.092 [2024-07-22 16:11:17.739160] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
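The trace above shows how target/dif.sh drives fio through the SPDK bdev fio plugin: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem, the combined JSON is handed to fio on /dev/fd/62, and the generated job file on /dev/fd/61. Outside the test harness the same run reduces to roughly the sketch below (paths and options are the ones printed in this log; bdev.json and dif.fio stand in for the two file descriptors):

  # preload the SPDK bdev ioengine and point fio at the generated NVMe/TCP attach config
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio
  # bdev.json attaches Nvme0/Nvme1 to nqn.2016-06.io.spdk:cnode0/cnode1 at 10.0.0.2:4420,
  # dif.fio carries the randread jobs shown in the filename0/filename1 lines above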
00:34:16.092 [2024-07-22 16:11:17.739231] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:26.060 00:34:26.060 filename0: (groupid=0, jobs=1): err= 0: pid=74438: Mon Jul 22 16:11:27 2024 00:34:26.060 read: IOPS=4569, BW=17.9MiB/s (18.7MB/s)(179MiB/10001msec) 00:34:26.060 slat (usec): min=6, max=114, avg=14.50, stdev= 4.89 00:34:26.060 clat (usec): min=629, max=2339, avg=832.82, stdev=98.94 00:34:26.060 lat (usec): min=640, max=2350, avg=847.32, stdev=99.56 00:34:26.060 clat percentiles (usec): 00:34:26.060 | 1.00th=[ 725], 5.00th=[ 750], 10.00th=[ 758], 20.00th=[ 775], 00:34:26.060 | 30.00th=[ 783], 40.00th=[ 791], 50.00th=[ 799], 60.00th=[ 807], 00:34:26.060 | 70.00th=[ 824], 80.00th=[ 857], 90.00th=[ 1020], 95.00th=[ 1057], 00:34:26.060 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1205], 99.95th=[ 1254], 00:34:26.060 | 99.99th=[ 1811] 00:34:26.060 bw ( KiB/s): min=15232, max=19655, per=50.08%, avg=18309.89, stdev=1283.96, samples=19 00:34:26.060 iops : min= 3808, max= 4913, avg=4577.37, stdev=320.97, samples=19 00:34:26.060 lat (usec) : 750=5.93%, 1000=82.14% 00:34:26.060 lat (msec) : 2=11.92%, 4=0.01% 00:34:26.060 cpu : usr=89.31%, sys=8.93%, ctx=19, majf=0, minf=0 00:34:26.060 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:26.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.060 issued rwts: total=45704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.060 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:26.060 filename1: (groupid=0, jobs=1): err= 0: pid=74439: Mon Jul 22 16:11:27 2024 00:34:26.060 read: IOPS=4569, BW=17.9MiB/s (18.7MB/s)(179MiB/10001msec) 00:34:26.060 slat (nsec): min=5229, max=62158, avg=13217.92, stdev=4219.73 00:34:26.060 clat (usec): min=630, max=2514, avg=838.73, stdev=103.07 00:34:26.060 lat (usec): min=637, max=2540, avg=851.95, stdev=103.12 00:34:26.060 clat percentiles (usec): 00:34:26.060 | 1.00th=[ 709], 5.00th=[ 742], 10.00th=[ 758], 20.00th=[ 775], 00:34:26.060 | 30.00th=[ 783], 40.00th=[ 799], 50.00th=[ 807], 60.00th=[ 816], 00:34:26.060 | 70.00th=[ 832], 80.00th=[ 865], 90.00th=[ 1029], 95.00th=[ 1074], 00:34:26.060 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1221], 99.95th=[ 1287], 00:34:26.060 | 99.99th=[ 2180] 00:34:26.060 bw ( KiB/s): min=15232, max=19655, per=50.09%, avg=18311.58, stdev=1281.98, samples=19 00:34:26.060 iops : min= 3808, max= 4913, avg=4577.79, stdev=320.47, samples=19 00:34:26.060 lat (usec) : 750=7.33%, 1000=80.12% 00:34:26.060 lat (msec) : 2=12.53%, 4=0.02% 00:34:26.060 cpu : usr=88.89%, sys=9.54%, ctx=15, majf=0, minf=0 00:34:26.060 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:26.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.060 issued rwts: total=45703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.061 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:26.061 00:34:26.061 Run status group 0 (all jobs): 00:34:26.061 READ: bw=35.7MiB/s (37.4MB/s), 17.9MiB/s-17.9MiB/s (18.7MB/s-18.7MB/s), io=357MiB (374MB), run=10001-10001msec 00:34:26.061 16:11:28 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:26.061 16:11:28 -- target/dif.sh@43 -- # local sub 00:34:26.061 16:11:28 -- target/dif.sh@45 -- # for sub in "$@" 00:34:26.061 16:11:28 -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:34:26.061 16:11:28 -- target/dif.sh@36 -- # local sub_id=0 00:34:26.061 16:11:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:26.061 16:11:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:26.061 16:11:28 -- common/autotest_common.sh@10 -- # set +x 00:34:26.061 16:11:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:26.061 16:11:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:26.061 16:11:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:26.061 16:11:28 -- common/autotest_common.sh@10 -- # set +x 00:34:26.061 16:11:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:26.061 16:11:28 -- target/dif.sh@45 -- # for sub in "$@" 00:34:26.061 16:11:28 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:26.061 16:11:28 -- target/dif.sh@36 -- # local sub_id=1 00:34:26.061 16:11:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:26.061 16:11:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:26.061 16:11:28 -- common/autotest_common.sh@10 -- # set +x 00:34:26.061 16:11:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:26.061 16:11:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:26.061 16:11:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:26.061 16:11:28 -- common/autotest_common.sh@10 -- # set +x 00:34:26.061 ************************************ 00:34:26.061 END TEST fio_dif_1_multi_subsystems 00:34:26.061 ************************************ 00:34:26.061 16:11:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:26.061 00:34:26.061 real 0m10.963s 00:34:26.061 user 0m18.481s 00:34:26.061 sys 0m2.059s 00:34:26.061 16:11:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:26.061 16:11:28 -- common/autotest_common.sh@10 -- # set +x 00:34:26.061 16:11:28 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:26.061 16:11:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:26.061 16:11:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:26.061 16:11:28 -- common/autotest_common.sh@10 -- # set +x 00:34:26.061 ************************************ 00:34:26.061 START TEST fio_dif_rand_params 00:34:26.061 ************************************ 00:34:26.061 16:11:28 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:34:26.061 16:11:28 -- target/dif.sh@100 -- # local NULL_DIF 00:34:26.061 16:11:28 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:26.061 16:11:28 -- target/dif.sh@103 -- # NULL_DIF=3 00:34:26.061 16:11:28 -- target/dif.sh@103 -- # bs=128k 00:34:26.061 16:11:28 -- target/dif.sh@103 -- # numjobs=3 00:34:26.061 16:11:28 -- target/dif.sh@103 -- # iodepth=3 00:34:26.061 16:11:28 -- target/dif.sh@103 -- # runtime=5 00:34:26.061 16:11:28 -- target/dif.sh@105 -- # create_subsystems 0 00:34:26.061 16:11:28 -- target/dif.sh@28 -- # local sub 00:34:26.061 16:11:28 -- target/dif.sh@30 -- # for sub in "$@" 00:34:26.061 16:11:28 -- target/dif.sh@31 -- # create_subsystem 0 00:34:26.061 16:11:28 -- target/dif.sh@18 -- # local sub_id=0 00:34:26.061 16:11:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:26.061 16:11:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:26.061 16:11:28 -- common/autotest_common.sh@10 -- # set +x 00:34:26.061 bdev_null0 00:34:26.061 16:11:28 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:34:26.061 16:11:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:26.061 16:11:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:26.061 16:11:28 -- common/autotest_common.sh@10 -- # set +x 00:34:26.061 16:11:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:26.061 16:11:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:26.061 16:11:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:26.061 16:11:28 -- common/autotest_common.sh@10 -- # set +x 00:34:26.061 16:11:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:26.061 16:11:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:26.061 16:11:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:26.061 16:11:28 -- common/autotest_common.sh@10 -- # set +x 00:34:26.061 [2024-07-22 16:11:28.131386] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:26.061 16:11:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:26.061 16:11:28 -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:26.061 16:11:28 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:26.061 16:11:28 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:26.061 16:11:28 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:26.061 16:11:28 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:26.061 16:11:28 -- nvmf/common.sh@520 -- # config=() 00:34:26.061 16:11:28 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:26.061 16:11:28 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:26.061 16:11:28 -- nvmf/common.sh@520 -- # local subsystem config 00:34:26.061 16:11:28 -- target/dif.sh@82 -- # gen_fio_conf 00:34:26.061 16:11:28 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:26.061 16:11:28 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:26.061 16:11:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:26.061 16:11:28 -- target/dif.sh@54 -- # local file 00:34:26.061 16:11:28 -- common/autotest_common.sh@1320 -- # shift 00:34:26.061 16:11:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:26.061 { 00:34:26.061 "params": { 00:34:26.061 "name": "Nvme$subsystem", 00:34:26.061 "trtype": "$TEST_TRANSPORT", 00:34:26.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:26.061 "adrfam": "ipv4", 00:34:26.061 "trsvcid": "$NVMF_PORT", 00:34:26.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:26.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:26.061 "hdgst": ${hdgst:-false}, 00:34:26.061 "ddgst": ${ddgst:-false} 00:34:26.061 }, 00:34:26.061 "method": "bdev_nvme_attach_controller" 00:34:26.061 } 00:34:26.061 EOF 00:34:26.061 )") 00:34:26.061 16:11:28 -- target/dif.sh@56 -- # cat 00:34:26.061 16:11:28 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:26.061 16:11:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:26.061 16:11:28 -- nvmf/common.sh@542 -- # cat 00:34:26.061 16:11:28 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:26.061 16:11:28 -- common/autotest_common.sh@1324 -- # 
grep libasan 00:34:26.061 16:11:28 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:26.061 16:11:28 -- target/dif.sh@72 -- # (( file <= files )) 00:34:26.061 16:11:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:26.061 16:11:28 -- nvmf/common.sh@544 -- # jq . 00:34:26.061 16:11:28 -- nvmf/common.sh@545 -- # IFS=, 00:34:26.061 16:11:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:26.061 "params": { 00:34:26.061 "name": "Nvme0", 00:34:26.061 "trtype": "tcp", 00:34:26.061 "traddr": "10.0.0.2", 00:34:26.061 "adrfam": "ipv4", 00:34:26.061 "trsvcid": "4420", 00:34:26.061 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:26.061 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:26.061 "hdgst": false, 00:34:26.061 "ddgst": false 00:34:26.061 }, 00:34:26.061 "method": "bdev_nvme_attach_controller" 00:34:26.061 }' 00:34:26.061 16:11:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:26.061 16:11:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:26.061 16:11:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:26.061 16:11:28 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:26.061 16:11:28 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:26.061 16:11:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:26.061 16:11:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:26.061 16:11:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:26.061 16:11:28 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:26.061 16:11:28 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:26.061 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:26.061 ... 00:34:26.061 fio-3.35 00:34:26.061 Starting 3 threads 00:34:26.061 [2024-07-22 16:11:28.670908] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
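For reference, the rpc_cmd calls traced above map onto plain SPDK rpc.py invocations; a minimal sketch of the subsystem setup used by this fio_dif_rand_params pass (the scripts/rpc.py path is assumed relative to the checked-out repo) would be:

  # null bdev: 64 MiB of 512-byte blocks, 16-byte metadata, DIF type 3
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # NVMe-oF subsystem with that bdev as a namespace, listening on NVMe/TCP 10.0.0.2:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420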
00:34:26.061 [2024-07-22 16:11:28.670995] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:31.327 00:34:31.327 filename0: (groupid=0, jobs=1): err= 0: pid=74599: Mon Jul 22 16:11:33 2024 00:34:31.327 read: IOPS=247, BW=31.0MiB/s (32.5MB/s)(155MiB/5010msec) 00:34:31.327 slat (nsec): min=4929, max=95333, avg=19000.87, stdev=7079.92 00:34:31.327 clat (usec): min=11376, max=17780, avg=12054.88, stdev=1017.11 00:34:31.327 lat (usec): min=11390, max=17817, avg=12073.88, stdev=1017.11 00:34:31.327 clat percentiles (usec): 00:34:31.327 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:34:31.327 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11600], 60.00th=[11600], 00:34:31.327 | 70.00th=[11731], 80.00th=[12125], 90.00th=[14091], 95.00th=[14353], 00:34:31.327 | 99.00th=[15008], 99.50th=[15664], 99.90th=[17695], 99.95th=[17695], 00:34:31.327 | 99.99th=[17695] 00:34:31.327 bw ( KiB/s): min=26880, max=33024, per=33.33%, avg=31724.70, stdev=2080.65, samples=10 00:34:31.327 iops : min= 210, max= 258, avg=247.80, stdev=16.26, samples=10 00:34:31.327 lat (msec) : 20=100.00% 00:34:31.327 cpu : usr=91.56%, sys=7.85%, ctx=5, majf=0, minf=9 00:34:31.327 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:31.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.327 issued rwts: total=1242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.327 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:31.327 filename0: (groupid=0, jobs=1): err= 0: pid=74600: Mon Jul 22 16:11:33 2024 00:34:31.327 read: IOPS=247, BW=31.0MiB/s (32.5MB/s)(155MiB/5009msec) 00:34:31.327 slat (nsec): min=5538, max=93237, avg=18717.62, stdev=6944.08 00:34:31.327 clat (usec): min=11429, max=17788, avg=12052.92, stdev=1008.47 00:34:31.327 lat (usec): min=11451, max=17820, avg=12071.64, stdev=1008.48 00:34:31.327 clat percentiles (usec): 00:34:31.327 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:34:31.327 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11600], 60.00th=[11600], 00:34:31.327 | 70.00th=[11731], 80.00th=[12125], 90.00th=[14091], 95.00th=[14484], 00:34:31.327 | 99.00th=[15008], 99.50th=[15270], 99.90th=[17695], 99.95th=[17695], 00:34:31.327 | 99.99th=[17695] 00:34:31.327 bw ( KiB/s): min=26880, max=33024, per=33.33%, avg=31718.40, stdev=2081.33, samples=10 00:34:31.327 iops : min= 210, max= 258, avg=247.80, stdev=16.26, samples=10 00:34:31.327 lat (msec) : 20=100.00% 00:34:31.327 cpu : usr=91.57%, sys=7.85%, ctx=8, majf=0, minf=9 00:34:31.327 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:31.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.327 issued rwts: total=1242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.327 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:31.327 filename0: (groupid=0, jobs=1): err= 0: pid=74601: Mon Jul 22 16:11:33 2024 00:34:31.327 read: IOPS=247, BW=31.0MiB/s (32.5MB/s)(155MiB/5011msec) 00:34:31.327 slat (nsec): min=3729, max=80204, avg=18331.26, stdev=7505.22 00:34:31.327 clat (usec): min=11371, max=17777, avg=12057.91, stdev=1027.32 00:34:31.327 lat (usec): min=11386, max=17815, avg=12076.25, stdev=1027.28 00:34:31.327 clat percentiles (usec): 00:34:31.327 | 1.00th=[11469], 5.00th=[11469], 
10.00th=[11469], 20.00th=[11469], 00:34:31.327 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11600], 60.00th=[11600], 00:34:31.327 | 70.00th=[11731], 80.00th=[12125], 90.00th=[14091], 95.00th=[14484], 00:34:31.327 | 99.00th=[15139], 99.50th=[15664], 99.90th=[17695], 99.95th=[17695], 00:34:31.327 | 99.99th=[17695] 00:34:31.327 bw ( KiB/s): min=26880, max=33024, per=33.33%, avg=31718.40, stdev=2081.33, samples=10 00:34:31.327 iops : min= 210, max= 258, avg=247.80, stdev=16.26, samples=10 00:34:31.327 lat (msec) : 20=100.00% 00:34:31.327 cpu : usr=91.42%, sys=7.92%, ctx=15, majf=0, minf=0 00:34:31.327 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:31.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.327 issued rwts: total=1242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.327 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:31.327 00:34:31.327 Run status group 0 (all jobs): 00:34:31.327 READ: bw=92.9MiB/s (97.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=466MiB (488MB), run=5009-5011msec 00:34:31.327 16:11:33 -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:31.327 16:11:33 -- target/dif.sh@43 -- # local sub 00:34:31.327 16:11:33 -- target/dif.sh@45 -- # for sub in "$@" 00:34:31.327 16:11:33 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:31.327 16:11:33 -- target/dif.sh@36 -- # local sub_id=0 00:34:31.327 16:11:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:31.327 16:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.327 16:11:33 -- common/autotest_common.sh@10 -- # set +x 00:34:31.327 16:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.327 16:11:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:31.327 16:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.327 16:11:33 -- common/autotest_common.sh@10 -- # set +x 00:34:31.327 16:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.327 16:11:33 -- target/dif.sh@109 -- # NULL_DIF=2 00:34:31.337 16:11:33 -- target/dif.sh@109 -- # bs=4k 00:34:31.337 16:11:33 -- target/dif.sh@109 -- # numjobs=8 00:34:31.337 16:11:33 -- target/dif.sh@109 -- # iodepth=16 00:34:31.337 16:11:33 -- target/dif.sh@109 -- # runtime= 00:34:31.337 16:11:33 -- target/dif.sh@109 -- # files=2 00:34:31.337 16:11:33 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:31.337 16:11:33 -- target/dif.sh@28 -- # local sub 00:34:31.337 16:11:33 -- target/dif.sh@30 -- # for sub in "$@" 00:34:31.337 16:11:33 -- target/dif.sh@31 -- # create_subsystem 0 00:34:31.337 16:11:33 -- target/dif.sh@18 -- # local sub_id=0 00:34:31.337 16:11:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:31.337 16:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.337 16:11:33 -- common/autotest_common.sh@10 -- # set +x 00:34:31.337 bdev_null0 00:34:31.337 16:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.337 16:11:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:31.337 16:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.337 16:11:33 -- common/autotest_common.sh@10 -- # set +x 00:34:31.337 16:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.337 16:11:33 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:31.337 16:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.337 16:11:33 -- common/autotest_common.sh@10 -- # set +x 00:34:31.337 16:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.337 16:11:34 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:31.337 16:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.337 16:11:34 -- common/autotest_common.sh@10 -- # set +x 00:34:31.337 [2024-07-22 16:11:34.010804] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:31.337 16:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.337 16:11:34 -- target/dif.sh@30 -- # for sub in "$@" 00:34:31.337 16:11:34 -- target/dif.sh@31 -- # create_subsystem 1 00:34:31.337 16:11:34 -- target/dif.sh@18 -- # local sub_id=1 00:34:31.337 16:11:34 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:31.338 16:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.338 16:11:34 -- common/autotest_common.sh@10 -- # set +x 00:34:31.338 bdev_null1 00:34:31.338 16:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.338 16:11:34 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:31.338 16:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.338 16:11:34 -- common/autotest_common.sh@10 -- # set +x 00:34:31.338 16:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.338 16:11:34 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:31.338 16:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.338 16:11:34 -- common/autotest_common.sh@10 -- # set +x 00:34:31.338 16:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.338 16:11:34 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:31.338 16:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.338 16:11:34 -- common/autotest_common.sh@10 -- # set +x 00:34:31.338 16:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.338 16:11:34 -- target/dif.sh@30 -- # for sub in "$@" 00:34:31.338 16:11:34 -- target/dif.sh@31 -- # create_subsystem 2 00:34:31.338 16:11:34 -- target/dif.sh@18 -- # local sub_id=2 00:34:31.338 16:11:34 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:31.338 16:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.338 16:11:34 -- common/autotest_common.sh@10 -- # set +x 00:34:31.338 bdev_null2 00:34:31.338 16:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.338 16:11:34 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:31.338 16:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.338 16:11:34 -- common/autotest_common.sh@10 -- # set +x 00:34:31.338 16:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.338 16:11:34 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:31.338 16:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.338 16:11:34 -- common/autotest_common.sh@10 -- # set +x 00:34:31.338 16:11:34 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.338 16:11:34 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:31.338 16:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.338 16:11:34 -- common/autotest_common.sh@10 -- # set +x 00:34:31.338 16:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.338 16:11:34 -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:31.338 16:11:34 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:31.338 16:11:34 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:31.338 16:11:34 -- nvmf/common.sh@520 -- # config=() 00:34:31.338 16:11:34 -- nvmf/common.sh@520 -- # local subsystem config 00:34:31.338 16:11:34 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.338 16:11:34 -- target/dif.sh@82 -- # gen_fio_conf 00:34:31.338 16:11:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:31.338 16:11:34 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.338 16:11:34 -- target/dif.sh@54 -- # local file 00:34:31.338 16:11:34 -- target/dif.sh@56 -- # cat 00:34:31.338 16:11:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:31.338 { 00:34:31.338 "params": { 00:34:31.338 "name": "Nvme$subsystem", 00:34:31.338 "trtype": "$TEST_TRANSPORT", 00:34:31.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:31.338 "adrfam": "ipv4", 00:34:31.338 "trsvcid": "$NVMF_PORT", 00:34:31.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:31.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:31.338 "hdgst": ${hdgst:-false}, 00:34:31.338 "ddgst": ${ddgst:-false} 00:34:31.338 }, 00:34:31.338 "method": "bdev_nvme_attach_controller" 00:34:31.338 } 00:34:31.338 EOF 00:34:31.338 )") 00:34:31.338 16:11:34 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:31.338 16:11:34 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:31.338 16:11:34 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:31.338 16:11:34 -- nvmf/common.sh@542 -- # cat 00:34:31.338 16:11:34 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:31.338 16:11:34 -- common/autotest_common.sh@1320 -- # shift 00:34:31.338 16:11:34 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:31.338 16:11:34 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:31.338 16:11:34 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:31.338 16:11:34 -- target/dif.sh@72 -- # (( file <= files )) 00:34:31.338 16:11:34 -- target/dif.sh@73 -- # cat 00:34:31.338 16:11:34 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:31.338 16:11:34 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:31.338 16:11:34 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:31.338 16:11:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:31.338 16:11:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:31.338 { 00:34:31.338 "params": { 00:34:31.338 "name": "Nvme$subsystem", 00:34:31.338 "trtype": "$TEST_TRANSPORT", 00:34:31.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:31.338 "adrfam": "ipv4", 00:34:31.338 "trsvcid": "$NVMF_PORT", 00:34:31.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:31.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:34:31.338 "hdgst": ${hdgst:-false}, 00:34:31.338 "ddgst": ${ddgst:-false} 00:34:31.338 }, 00:34:31.338 "method": "bdev_nvme_attach_controller" 00:34:31.338 } 00:34:31.338 EOF 00:34:31.338 )") 00:34:31.338 16:11:34 -- target/dif.sh@72 -- # (( file++ )) 00:34:31.338 16:11:34 -- target/dif.sh@72 -- # (( file <= files )) 00:34:31.338 16:11:34 -- target/dif.sh@73 -- # cat 00:34:31.338 16:11:34 -- nvmf/common.sh@542 -- # cat 00:34:31.338 16:11:34 -- target/dif.sh@72 -- # (( file++ )) 00:34:31.338 16:11:34 -- target/dif.sh@72 -- # (( file <= files )) 00:34:31.338 16:11:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:31.338 16:11:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:31.338 { 00:34:31.338 "params": { 00:34:31.338 "name": "Nvme$subsystem", 00:34:31.338 "trtype": "$TEST_TRANSPORT", 00:34:31.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:31.338 "adrfam": "ipv4", 00:34:31.338 "trsvcid": "$NVMF_PORT", 00:34:31.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:31.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:31.338 "hdgst": ${hdgst:-false}, 00:34:31.338 "ddgst": ${ddgst:-false} 00:34:31.338 }, 00:34:31.338 "method": "bdev_nvme_attach_controller" 00:34:31.338 } 00:34:31.338 EOF 00:34:31.338 )") 00:34:31.338 16:11:34 -- nvmf/common.sh@542 -- # cat 00:34:31.338 16:11:34 -- nvmf/common.sh@544 -- # jq . 00:34:31.338 16:11:34 -- nvmf/common.sh@545 -- # IFS=, 00:34:31.338 16:11:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:31.338 "params": { 00:34:31.338 "name": "Nvme0", 00:34:31.338 "trtype": "tcp", 00:34:31.338 "traddr": "10.0.0.2", 00:34:31.338 "adrfam": "ipv4", 00:34:31.338 "trsvcid": "4420", 00:34:31.338 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:31.338 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:31.338 "hdgst": false, 00:34:31.338 "ddgst": false 00:34:31.338 }, 00:34:31.338 "method": "bdev_nvme_attach_controller" 00:34:31.338 },{ 00:34:31.338 "params": { 00:34:31.338 "name": "Nvme1", 00:34:31.338 "trtype": "tcp", 00:34:31.338 "traddr": "10.0.0.2", 00:34:31.338 "adrfam": "ipv4", 00:34:31.338 "trsvcid": "4420", 00:34:31.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:31.338 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:31.338 "hdgst": false, 00:34:31.338 "ddgst": false 00:34:31.338 }, 00:34:31.338 "method": "bdev_nvme_attach_controller" 00:34:31.338 },{ 00:34:31.338 "params": { 00:34:31.338 "name": "Nvme2", 00:34:31.338 "trtype": "tcp", 00:34:31.338 "traddr": "10.0.0.2", 00:34:31.338 "adrfam": "ipv4", 00:34:31.338 "trsvcid": "4420", 00:34:31.338 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:31.339 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:31.339 "hdgst": false, 00:34:31.339 "ddgst": false 00:34:31.339 }, 00:34:31.339 "method": "bdev_nvme_attach_controller" 00:34:31.339 }' 00:34:31.339 16:11:34 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:31.339 16:11:34 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:31.339 16:11:34 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:31.339 16:11:34 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:31.339 16:11:34 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:31.339 16:11:34 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:31.339 16:11:34 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:31.339 16:11:34 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:31.339 16:11:34 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:31.339 16:11:34 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.597 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:31.597 ... 00:34:31.597 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:31.597 ... 00:34:31.597 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:31.597 ... 00:34:31.597 fio-3.35 00:34:31.597 Starting 24 threads 00:34:32.164 [2024-07-22 16:11:34.805837] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:34:32.164 [2024-07-22 16:11:34.805910] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:50.265 00:34:50.265 filename0: (groupid=0, jobs=1): err= 0: pid=74697: Mon Jul 22 16:11:51 2024 00:34:50.265 read: IOPS=304, BW=1220KiB/s (1249kB/s)(12.0MiB/10040msec) 00:34:50.265 slat (usec): min=7, max=4119, avg=33.78, stdev=207.29 00:34:50.265 clat (msec): min=8, max=149, avg=52.20, stdev=22.16 00:34:50.265 lat (msec): min=8, max=149, avg=52.23, stdev=22.18 00:34:50.265 clat percentiles (msec): 00:34:50.265 | 1.00th=[ 16], 5.00th=[ 24], 10.00th=[ 31], 20.00th=[ 34], 00:34:50.265 | 30.00th=[ 39], 40.00th=[ 42], 50.00th=[ 48], 60.00th=[ 53], 00:34:50.265 | 70.00th=[ 62], 80.00th=[ 71], 90.00th=[ 85], 95.00th=[ 94], 00:34:50.265 | 99.00th=[ 113], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 150], 00:34:50.265 | 99.99th=[ 150] 00:34:50.265 bw ( KiB/s): min= 512, max= 1960, per=2.97%, avg=1217.55, stdev=418.98, samples=20 00:34:50.265 iops : min= 128, max= 490, avg=304.30, stdev=104.71, samples=20 00:34:50.265 lat (msec) : 10=0.23%, 20=1.57%, 50=57.33%, 100=37.73%, 250=3.14% 00:34:50.265 cpu : usr=41.22%, sys=2.59%, ctx=1488, majf=0, minf=9 00:34:50.265 IO depths : 1=0.3%, 2=3.8%, 4=15.4%, 8=66.4%, 16=14.2%, 32=0.0%, >=64=0.0% 00:34:50.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.265 complete : 0=0.0%, 4=92.0%, 8=4.5%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.265 issued rwts: total=3061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.265 filename0: (groupid=0, jobs=1): err= 0: pid=74698: Mon Jul 22 16:11:51 2024 00:34:50.265 read: IOPS=571, BW=2284KiB/s (2339kB/s)(22.3MiB/10003msec) 00:34:50.265 slat (usec): min=8, max=9047, avg=22.57, stdev=176.81 00:34:50.265 clat (msec): min=4, max=186, avg=27.91, stdev=24.23 00:34:50.265 lat (msec): min=4, max=186, avg=27.93, stdev=24.24 00:34:50.265 clat percentiles (msec): 00:34:50.265 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 15], 00:34:50.265 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 20], 60.00th=[ 22], 00:34:50.265 | 70.00th=[ 24], 80.00th=[ 33], 90.00th=[ 64], 95.00th=[ 88], 00:34:50.265 | 99.00th=[ 127], 99.50th=[ 140], 99.90th=[ 171], 99.95th=[ 171], 00:34:50.265 | 99.99th=[ 186] 00:34:50.265 bw ( KiB/s): min= 495, max= 4008, per=5.49%, avg=2250.53, stdev=1316.40, samples=19 00:34:50.265 iops : min= 123, max= 1002, avg=562.58, stdev=329.17, samples=19 00:34:50.265 lat (msec) : 10=1.68%, 20=50.54%, 50=36.48%, 100=8.86%, 250=2.43% 00:34:50.265 cpu : usr=62.36%, sys=4.40%, ctx=835, majf=0, minf=9 00:34:50.265 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.2%, 16=15.9%, 32=0.0%, 
>=64=0.0% 00:34:50.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.265 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.265 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.265 filename0: (groupid=0, jobs=1): err= 0: pid=74699: Mon Jul 22 16:11:51 2024 00:34:50.265 read: IOPS=581, BW=2328KiB/s (2384kB/s)(22.7MiB/10002msec) 00:34:50.265 slat (usec): min=5, max=4032, avg=21.23, stdev=91.22 00:34:50.265 clat (msec): min=2, max=179, avg=27.40, stdev=23.66 00:34:50.265 lat (msec): min=2, max=179, avg=27.42, stdev=23.65 00:34:50.265 clat percentiles (msec): 00:34:50.265 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 15], 00:34:50.265 | 30.00th=[ 16], 40.00th=[ 18], 50.00th=[ 20], 60.00th=[ 22], 00:34:50.265 | 70.00th=[ 24], 80.00th=[ 34], 90.00th=[ 55], 95.00th=[ 84], 00:34:50.265 | 99.00th=[ 123], 99.50th=[ 148], 99.90th=[ 169], 99.95th=[ 169], 00:34:50.265 | 99.99th=[ 180] 00:34:50.265 bw ( KiB/s): min= 384, max= 4032, per=5.61%, avg=2303.68, stdev=1302.82, samples=19 00:34:50.265 iops : min= 96, max= 1008, avg=575.79, stdev=325.69, samples=19 00:34:50.265 lat (msec) : 4=0.33%, 10=1.37%, 20=50.52%, 50=37.43%, 100=7.49% 00:34:50.265 lat (msec) : 250=2.85% 00:34:50.265 cpu : usr=64.09%, sys=4.25%, ctx=1193, majf=0, minf=9 00:34:50.265 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.9%, 16=15.8%, 32=0.0%, >=64=0.0% 00:34:50.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.265 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.265 issued rwts: total=5821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.265 filename0: (groupid=0, jobs=1): err= 0: pid=74700: Mon Jul 22 16:11:51 2024 00:34:50.265 read: IOPS=537, BW=2149KiB/s (2201kB/s)(21.0MiB/10012msec) 00:34:50.265 slat (usec): min=4, max=8024, avg=21.97, stdev=154.60 00:34:50.265 clat (msec): min=9, max=165, avg=29.67, stdev=22.81 00:34:50.265 lat (msec): min=9, max=165, avg=29.70, stdev=22.81 00:34:50.265 clat percentiles (msec): 00:34:50.265 | 1.00th=[ 11], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 16], 00:34:50.265 | 30.00th=[ 18], 40.00th=[ 21], 50.00th=[ 22], 60.00th=[ 23], 00:34:50.265 | 70.00th=[ 26], 80.00th=[ 39], 90.00th=[ 71], 95.00th=[ 85], 00:34:50.265 | 99.00th=[ 117], 99.50th=[ 122], 99.90th=[ 133], 99.95th=[ 140], 00:34:50.265 | 99.99th=[ 167] 00:34:50.265 bw ( KiB/s): min= 624, max= 3528, per=5.23%, avg=2147.25, stdev=1109.06, samples=20 00:34:50.265 iops : min= 156, max= 882, avg=536.75, stdev=277.26, samples=20 00:34:50.265 lat (msec) : 10=0.43%, 20=39.65%, 50=48.69%, 100=8.64%, 250=2.58% 00:34:50.265 cpu : usr=53.10%, sys=3.75%, ctx=855, majf=0, minf=9 00:34:50.265 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=78.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:34:50.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.265 complete : 0=0.0%, 4=88.6%, 8=10.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.265 issued rwts: total=5379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.265 filename0: (groupid=0, jobs=1): err= 0: pid=74701: Mon Jul 22 16:11:51 2024 00:34:50.265 read: IOPS=311, BW=1246KiB/s (1276kB/s)(12.2MiB/10039msec) 00:34:50.265 slat (usec): min=4, max=8059, avg=23.15, stdev=225.32 00:34:50.265 clat (msec): min=7, max=133, avg=51.17, stdev=22.58 
00:34:50.265 lat (msec): min=7, max=133, avg=51.19, stdev=22.58 00:34:50.265 clat percentiles (msec): 00:34:50.265 | 1.00th=[ 14], 5.00th=[ 24], 10.00th=[ 28], 20.00th=[ 35], 00:34:50.265 | 30.00th=[ 36], 40.00th=[ 40], 50.00th=[ 47], 60.00th=[ 50], 00:34:50.265 | 70.00th=[ 61], 80.00th=[ 69], 90.00th=[ 84], 95.00th=[ 99], 00:34:50.265 | 99.00th=[ 121], 99.50th=[ 131], 99.90th=[ 131], 99.95th=[ 134], 00:34:50.265 | 99.99th=[ 134] 00:34:50.265 bw ( KiB/s): min= 640, max= 2036, per=3.04%, avg=1246.15, stdev=440.35, samples=20 00:34:50.265 iops : min= 160, max= 509, avg=311.45, stdev=110.06, samples=20 00:34:50.265 lat (msec) : 10=0.06%, 20=1.76%, 50=60.28%, 100=33.80%, 250=4.09% 00:34:50.265 cpu : usr=32.34%, sys=2.64%, ctx=1218, majf=0, minf=9 00:34:50.265 IO depths : 1=0.3%, 2=3.7%, 4=14.6%, 8=67.0%, 16=14.4%, 32=0.0%, >=64=0.0% 00:34:50.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.265 complete : 0=0.0%, 4=91.8%, 8=4.8%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.265 issued rwts: total=3127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.265 filename0: (groupid=0, jobs=1): err= 0: pid=74702: Mon Jul 22 16:11:51 2024 00:34:50.265 read: IOPS=317, BW=1271KiB/s (1301kB/s)(12.4MiB/10030msec) 00:34:50.265 slat (usec): min=8, max=8105, avg=24.48, stdev=175.41 00:34:50.265 clat (msec): min=4, max=131, avg=50.20, stdev=23.02 00:34:50.265 lat (msec): min=4, max=131, avg=50.22, stdev=23.02 00:34:50.265 clat percentiles (msec): 00:34:50.265 | 1.00th=[ 12], 5.00th=[ 23], 10.00th=[ 25], 20.00th=[ 33], 00:34:50.265 | 30.00th=[ 36], 40.00th=[ 39], 50.00th=[ 46], 60.00th=[ 49], 00:34:50.265 | 70.00th=[ 60], 80.00th=[ 68], 90.00th=[ 88], 95.00th=[ 96], 00:34:50.265 | 99.00th=[ 115], 99.50th=[ 125], 99.90th=[ 130], 99.95th=[ 131], 00:34:50.265 | 99.99th=[ 132] 00:34:50.265 bw ( KiB/s): min= 640, max= 2187, per=3.09%, avg=1269.45, stdev=489.09, samples=20 00:34:50.265 iops : min= 160, max= 546, avg=317.30, stdev=122.22, samples=20 00:34:50.265 lat (msec) : 10=0.78%, 20=3.17%, 50=58.47%, 100=33.49%, 250=4.08% 00:34:50.265 cpu : usr=40.62%, sys=3.61%, ctx=1290, majf=0, minf=9 00:34:50.265 IO depths : 1=0.2%, 2=4.0%, 4=16.2%, 8=65.8%, 16=13.8%, 32=0.0%, >=64=0.0% 00:34:50.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.265 complete : 0=0.0%, 4=92.0%, 8=4.4%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.265 issued rwts: total=3186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.265 filename0: (groupid=0, jobs=1): err= 0: pid=74703: Mon Jul 22 16:11:51 2024 00:34:50.265 read: IOPS=549, BW=2198KiB/s (2251kB/s)(21.5MiB/10003msec) 00:34:50.265 slat (usec): min=3, max=8049, avg=26.30, stdev=216.70 00:34:50.265 clat (msec): min=8, max=183, avg=29.00, stdev=22.61 00:34:50.265 lat (msec): min=8, max=183, avg=29.02, stdev=22.61 00:34:50.265 clat percentiles (msec): 00:34:50.265 | 1.00th=[ 11], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 16], 00:34:50.265 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 22], 60.00th=[ 24], 00:34:50.265 | 70.00th=[ 25], 80.00th=[ 36], 90.00th=[ 61], 95.00th=[ 85], 00:34:50.265 | 99.00th=[ 116], 99.50th=[ 125], 99.90th=[ 153], 99.95th=[ 157], 00:34:50.265 | 99.99th=[ 184] 00:34:50.265 bw ( KiB/s): min= 496, max= 3568, per=5.31%, avg=2176.42, stdev=1150.13, samples=19 00:34:50.265 iops : min= 124, max= 892, avg=544.05, stdev=287.59, samples=19 00:34:50.265 lat (msec) : 10=0.69%, 20=46.04%, 
50=42.37%, 100=8.80%, 250=2.09% 00:34:50.265 cpu : usr=41.77%, sys=2.84%, ctx=1189, majf=0, minf=9 00:34:50.265 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.6%, 16=16.7%, 32=0.0%, >=64=0.0% 00:34:50.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.265 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.265 issued rwts: total=5497,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.265 filename0: (groupid=0, jobs=1): err= 0: pid=74704: Mon Jul 22 16:11:51 2024 00:34:50.265 read: IOPS=587, BW=2349KiB/s (2405kB/s)(22.9MiB/10001msec) 00:34:50.265 slat (usec): min=4, max=4047, avg=20.39, stdev=91.11 00:34:50.265 clat (usec): min=1495, max=179769, avg=27146.90, stdev=22533.56 00:34:50.265 lat (usec): min=1506, max=179796, avg=27167.29, stdev=22536.35 00:34:50.265 clat percentiles (msec): 00:34:50.265 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 15], 00:34:50.265 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 20], 60.00th=[ 22], 00:34:50.265 | 70.00th=[ 24], 80.00th=[ 34], 90.00th=[ 67], 95.00th=[ 84], 00:34:50.265 | 99.00th=[ 109], 99.50th=[ 112], 99.90th=[ 159], 99.95th=[ 159], 00:34:50.265 | 99.99th=[ 180] 00:34:50.265 bw ( KiB/s): min= 512, max= 4000, per=5.61%, avg=2303.74, stdev=1269.65, samples=19 00:34:50.265 iops : min= 128, max= 1000, avg=575.89, stdev=317.38, samples=19 00:34:50.265 lat (msec) : 2=0.51%, 4=0.63%, 10=1.53%, 20=49.11%, 50=37.34% 00:34:50.265 lat (msec) : 100=9.14%, 250=1.74% 00:34:50.265 cpu : usr=69.78%, sys=4.66%, ctx=798, majf=0, minf=9 00:34:50.265 IO depths : 1=0.1%, 2=1.0%, 4=4.2%, 8=79.6%, 16=15.2%, 32=0.0%, >=64=0.0% 00:34:50.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.265 complete : 0=0.0%, 4=87.9%, 8=11.1%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.265 issued rwts: total=5873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.265 filename1: (groupid=0, jobs=1): err= 0: pid=74705: Mon Jul 22 16:11:51 2024 00:34:50.265 read: IOPS=543, BW=2173KiB/s (2225kB/s)(21.2MiB/10010msec) 00:34:50.265 slat (usec): min=8, max=4057, avg=21.86, stdev=157.16 00:34:50.265 clat (msec): min=8, max=158, avg=29.35, stdev=22.75 00:34:50.265 lat (msec): min=8, max=158, avg=29.38, stdev=22.75 00:34:50.265 clat percentiles (msec): 00:34:50.265 | 1.00th=[ 11], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 16], 00:34:50.265 | 30.00th=[ 17], 40.00th=[ 19], 50.00th=[ 22], 60.00th=[ 24], 00:34:50.265 | 70.00th=[ 25], 80.00th=[ 39], 90.00th=[ 65], 95.00th=[ 83], 00:34:50.265 | 99.00th=[ 116], 99.50th=[ 142], 99.90th=[ 150], 99.95th=[ 150], 00:34:50.265 | 99.99th=[ 159] 00:34:50.265 bw ( KiB/s): min= 625, max= 3536, per=5.29%, avg=2171.10, stdev=1113.45, samples=20 00:34:50.265 iops : min= 156, max= 884, avg=542.75, stdev=278.39, samples=20 00:34:50.265 lat (msec) : 10=0.50%, 20=45.74%, 50=41.97%, 100=9.97%, 250=1.82% 00:34:50.265 cpu : usr=47.04%, sys=3.15%, ctx=1279, majf=0, minf=9 00:34:50.265 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=80.8%, 16=16.8%, 32=0.0%, >=64=0.0% 00:34:50.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.265 complete : 0=0.0%, 4=88.3%, 8=11.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.265 issued rwts: total=5437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.265 filename1: (groupid=0, jobs=1): err= 0: pid=74706: Mon Jul 22 16:11:51 
2024 00:34:50.265 read: IOPS=325, BW=1302KiB/s (1333kB/s)(12.8MiB/10047msec) 00:34:50.265 slat (usec): min=3, max=8065, avg=17.74, stdev=140.98 00:34:50.265 clat (msec): min=10, max=119, avg=48.98, stdev=20.58 00:34:50.265 lat (msec): min=10, max=119, avg=49.00, stdev=20.58 00:34:50.265 clat percentiles (msec): 00:34:50.265 | 1.00th=[ 13], 5.00th=[ 24], 10.00th=[ 26], 20.00th=[ 35], 00:34:50.265 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 47], 60.00th=[ 48], 00:34:50.265 | 70.00th=[ 58], 80.00th=[ 69], 90.00th=[ 81], 95.00th=[ 92], 00:34:50.265 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 120], 99.95th=[ 121], 00:34:50.265 | 99.99th=[ 121] 00:34:50.265 bw ( KiB/s): min= 768, max= 2032, per=3.17%, avg=1301.50, stdev=415.04, samples=20 00:34:50.265 iops : min= 192, max= 508, avg=325.35, stdev=103.78, samples=20 00:34:50.265 lat (msec) : 20=1.99%, 50=63.49%, 100=32.11%, 250=2.42% 00:34:50.265 cpu : usr=30.89%, sys=2.73%, ctx=909, majf=0, minf=9 00:34:50.265 IO depths : 1=0.2%, 2=1.7%, 4=6.8%, 8=75.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:34:50.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.265 complete : 0=0.0%, 4=89.8%, 8=8.6%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.265 issued rwts: total=3270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.265 filename1: (groupid=0, jobs=1): err= 0: pid=74707: Mon Jul 22 16:11:51 2024 00:34:50.265 read: IOPS=318, BW=1276KiB/s (1306kB/s)(12.5MiB/10047msec) 00:34:50.265 slat (usec): min=7, max=9047, avg=21.45, stdev=182.45 00:34:50.265 clat (msec): min=8, max=126, avg=50.00, stdev=22.14 00:34:50.265 lat (msec): min=8, max=126, avg=50.02, stdev=22.14 00:34:50.265 clat percentiles (msec): 00:34:50.265 | 1.00th=[ 12], 5.00th=[ 22], 10.00th=[ 26], 20.00th=[ 32], 00:34:50.265 | 30.00th=[ 36], 40.00th=[ 39], 50.00th=[ 46], 60.00th=[ 52], 00:34:50.266 | 70.00th=[ 61], 80.00th=[ 71], 90.00th=[ 83], 95.00th=[ 93], 00:34:50.266 | 99.00th=[ 109], 99.50th=[ 121], 99.90th=[ 127], 99.95th=[ 127], 00:34:50.266 | 99.99th=[ 127] 00:34:50.266 bw ( KiB/s): min= 760, max= 2160, per=3.11%, avg=1275.10, stdev=477.81, samples=20 00:34:50.266 iops : min= 190, max= 540, avg=318.75, stdev=119.47, samples=20 00:34:50.266 lat (msec) : 10=0.75%, 20=3.59%, 50=54.21%, 100=39.08%, 250=2.37% 00:34:50.266 cpu : usr=39.12%, sys=3.42%, ctx=1219, majf=0, minf=9 00:34:50.266 IO depths : 1=0.2%, 2=3.6%, 4=14.0%, 8=67.8%, 16=14.4%, 32=0.0%, >=64=0.0% 00:34:50.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 complete : 0=0.0%, 4=91.6%, 8=5.2%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 issued rwts: total=3204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.266 filename1: (groupid=0, jobs=1): err= 0: pid=74708: Mon Jul 22 16:11:51 2024 00:34:50.266 read: IOPS=302, BW=1211KiB/s (1240kB/s)(11.9MiB/10048msec) 00:34:50.266 slat (usec): min=4, max=8149, avg=61.86, stdev=395.34 00:34:50.266 clat (msec): min=7, max=137, avg=52.42, stdev=21.71 00:34:50.266 lat (msec): min=7, max=137, avg=52.48, stdev=21.69 00:34:50.266 clat percentiles (msec): 00:34:50.266 | 1.00th=[ 10], 5.00th=[ 24], 10.00th=[ 32], 20.00th=[ 36], 00:34:50.266 | 30.00th=[ 38], 40.00th=[ 44], 50.00th=[ 48], 60.00th=[ 53], 00:34:50.266 | 70.00th=[ 61], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 96], 00:34:50.266 | 99.00th=[ 112], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 138], 00:34:50.266 | 99.99th=[ 138] 00:34:50.266 bw ( 
KiB/s): min= 656, max= 1976, per=2.95%, avg=1210.85, stdev=404.90, samples=20 00:34:50.266 iops : min= 164, max= 494, avg=302.65, stdev=101.17, samples=20 00:34:50.266 lat (msec) : 10=1.05%, 20=1.61%, 50=55.41%, 100=37.63%, 250=4.30% 00:34:50.266 cpu : usr=36.21%, sys=2.19%, ctx=1145, majf=0, minf=9 00:34:50.266 IO depths : 1=0.4%, 2=3.1%, 4=12.5%, 8=69.3%, 16=14.7%, 32=0.0%, >=64=0.0% 00:34:50.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 complete : 0=0.0%, 4=91.3%, 8=5.8%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 issued rwts: total=3043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.266 filename1: (groupid=0, jobs=1): err= 0: pid=74709: Mon Jul 22 16:11:51 2024 00:34:50.266 read: IOPS=405, BW=1621KiB/s (1660kB/s)(15.9MiB/10025msec) 00:34:50.266 slat (usec): min=8, max=4123, avg=32.75, stdev=188.85 00:34:50.266 clat (msec): min=9, max=159, avg=39.30, stdev=24.02 00:34:50.266 lat (msec): min=9, max=159, avg=39.33, stdev=24.03 00:34:50.266 clat percentiles (msec): 00:34:50.266 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 16], 20.00th=[ 18], 00:34:50.266 | 30.00th=[ 23], 40.00th=[ 28], 50.00th=[ 36], 60.00th=[ 40], 00:34:50.266 | 70.00th=[ 48], 80.00th=[ 59], 90.00th=[ 74], 95.00th=[ 88], 00:34:50.266 | 99.00th=[ 110], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 130], 00:34:50.266 | 99.99th=[ 161] 00:34:50.266 bw ( KiB/s): min= 766, max= 3928, per=3.88%, avg=1591.89, stdev=921.15, samples=19 00:34:50.266 iops : min= 191, max= 982, avg=397.89, stdev=230.29, samples=19 00:34:50.266 lat (msec) : 10=0.42%, 20=26.70%, 50=49.03%, 100=21.51%, 250=2.34% 00:34:50.266 cpu : usr=51.56%, sys=3.05%, ctx=1064, majf=0, minf=9 00:34:50.266 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=78.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:34:50.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 complete : 0=0.0%, 4=88.6%, 8=10.4%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 issued rwts: total=4063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.266 filename1: (groupid=0, jobs=1): err= 0: pid=74710: Mon Jul 22 16:11:51 2024 00:34:50.266 read: IOPS=313, BW=1254KiB/s (1284kB/s)(12.3MiB/10024msec) 00:34:50.266 slat (usec): min=4, max=8135, avg=86.46, stdev=552.46 00:34:50.266 clat (msec): min=10, max=127, avg=50.54, stdev=19.75 00:34:50.266 lat (msec): min=10, max=127, avg=50.62, stdev=19.75 00:34:50.266 clat percentiles (msec): 00:34:50.266 | 1.00th=[ 17], 5.00th=[ 24], 10.00th=[ 30], 20.00th=[ 35], 00:34:50.266 | 30.00th=[ 37], 40.00th=[ 42], 50.00th=[ 48], 60.00th=[ 51], 00:34:50.266 | 70.00th=[ 59], 80.00th=[ 64], 90.00th=[ 81], 95.00th=[ 90], 00:34:50.266 | 99.00th=[ 106], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 121], 00:34:50.266 | 99.99th=[ 128] 00:34:50.266 bw ( KiB/s): min= 808, max= 1960, per=3.05%, avg=1252.15, stdev=372.93, samples=20 00:34:50.266 iops : min= 202, max= 490, avg=313.00, stdev=93.23, samples=20 00:34:50.266 lat (msec) : 20=1.62%, 50=58.59%, 100=37.68%, 250=2.10% 00:34:50.266 cpu : usr=36.36%, sys=2.59%, ctx=1133, majf=0, minf=9 00:34:50.266 IO depths : 1=0.3%, 2=2.3%, 4=9.8%, 8=72.6%, 16=15.1%, 32=0.0%, >=64=0.0% 00:34:50.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 complete : 0=0.0%, 4=90.4%, 8=7.2%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 issued rwts: total=3142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.266 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:34:50.266 filename1: (groupid=0, jobs=1): err= 0: pid=74711: Mon Jul 22 16:11:51 2024 00:34:50.266 read: IOPS=298, BW=1196KiB/s (1224kB/s)(11.7MiB/10046msec) 00:34:50.266 slat (usec): min=3, max=8162, avg=91.92, stdev=624.25 00:34:50.266 clat (msec): min=9, max=132, avg=52.89, stdev=21.84 00:34:50.266 lat (msec): min=9, max=140, avg=52.98, stdev=21.85 00:34:50.266 clat percentiles (msec): 00:34:50.266 | 1.00th=[ 11], 5.00th=[ 24], 10.00th=[ 27], 20.00th=[ 36], 00:34:50.266 | 30.00th=[ 37], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 58], 00:34:50.266 | 70.00th=[ 61], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 95], 00:34:50.266 | 99.00th=[ 112], 99.50th=[ 127], 99.90th=[ 131], 99.95th=[ 133], 00:34:50.266 | 99.99th=[ 133] 00:34:50.266 bw ( KiB/s): min= 744, max= 2036, per=2.91%, avg=1194.90, stdev=406.37, samples=20 00:34:50.266 iops : min= 186, max= 509, avg=298.70, stdev=101.61, samples=20 00:34:50.266 lat (msec) : 10=0.53%, 20=1.76%, 50=54.85%, 100=39.76%, 250=3.10% 00:34:50.266 cpu : usr=34.45%, sys=2.05%, ctx=977, majf=0, minf=9 00:34:50.266 IO depths : 1=0.4%, 2=2.9%, 4=11.5%, 8=70.2%, 16=15.0%, 32=0.0%, >=64=0.0% 00:34:50.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 complete : 0=0.0%, 4=91.1%, 8=6.2%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 issued rwts: total=3003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.266 filename1: (groupid=0, jobs=1): err= 0: pid=74712: Mon Jul 22 16:11:51 2024 00:34:50.266 read: IOPS=300, BW=1203KiB/s (1232kB/s)(11.8MiB/10029msec) 00:34:50.266 slat (usec): min=7, max=8050, avg=28.24, stdev=326.67 00:34:50.266 clat (msec): min=10, max=152, avg=52.97, stdev=23.16 00:34:50.266 lat (msec): min=10, max=152, avg=53.00, stdev=23.17 00:34:50.266 clat percentiles (msec): 00:34:50.266 | 1.00th=[ 16], 5.00th=[ 24], 10.00th=[ 26], 20.00th=[ 36], 00:34:50.266 | 30.00th=[ 36], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 54], 00:34:50.266 | 70.00th=[ 61], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 99], 00:34:50.266 | 99.00th=[ 131], 99.50th=[ 140], 99.90th=[ 153], 99.95th=[ 153], 00:34:50.266 | 99.99th=[ 153] 00:34:50.266 bw ( KiB/s): min= 544, max= 1888, per=2.93%, avg=1201.60, stdev=411.57, samples=20 00:34:50.266 iops : min= 136, max= 472, avg=300.35, stdev=102.88, samples=20 00:34:50.266 lat (msec) : 20=1.09%, 50=57.23%, 100=37.23%, 250=4.44% 00:34:50.266 cpu : usr=30.52%, sys=3.05%, ctx=911, majf=0, minf=9 00:34:50.266 IO depths : 1=0.4%, 2=3.4%, 4=13.4%, 8=68.6%, 16=14.2%, 32=0.0%, >=64=0.0% 00:34:50.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 complete : 0=0.0%, 4=91.3%, 8=5.6%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 issued rwts: total=3016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.266 filename2: (groupid=0, jobs=1): err= 0: pid=74713: Mon Jul 22 16:11:51 2024 00:34:50.266 read: IOPS=303, BW=1213KiB/s (1242kB/s)(11.9MiB/10029msec) 00:34:50.266 slat (usec): min=8, max=12128, avg=57.62, stdev=442.21 00:34:50.266 clat (msec): min=11, max=132, avg=52.45, stdev=21.17 00:34:50.266 lat (msec): min=11, max=132, avg=52.51, stdev=21.17 00:34:50.266 clat percentiles (msec): 00:34:50.266 | 1.00th=[ 18], 5.00th=[ 25], 10.00th=[ 31], 20.00th=[ 35], 00:34:50.266 | 30.00th=[ 37], 40.00th=[ 43], 50.00th=[ 48], 60.00th=[ 56], 00:34:50.266 | 70.00th=[ 61], 80.00th=[ 72], 90.00th=[ 82], 95.00th=[ 
93], 00:34:50.266 | 99.00th=[ 109], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:34:50.266 | 99.99th=[ 133] 00:34:50.266 bw ( KiB/s): min= 704, max= 1844, per=2.95%, avg=1211.50, stdev=395.54, samples=20 00:34:50.266 iops : min= 176, max= 461, avg=302.85, stdev=98.90, samples=20 00:34:50.266 lat (msec) : 20=1.12%, 50=55.08%, 100=41.43%, 250=2.37% 00:34:50.266 cpu : usr=36.43%, sys=2.09%, ctx=1010, majf=0, minf=9 00:34:50.266 IO depths : 1=0.4%, 2=2.9%, 4=11.0%, 8=70.9%, 16=14.9%, 32=0.0%, >=64=0.0% 00:34:50.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 complete : 0=0.0%, 4=90.8%, 8=6.6%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 issued rwts: total=3041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.266 filename2: (groupid=0, jobs=1): err= 0: pid=74714: Mon Jul 22 16:11:51 2024 00:34:50.266 read: IOPS=575, BW=2300KiB/s (2356kB/s)(22.5MiB/10009msec) 00:34:50.266 slat (usec): min=8, max=8048, avg=23.34, stdev=161.12 00:34:50.266 clat (msec): min=8, max=159, avg=27.72, stdev=21.92 00:34:50.266 lat (msec): min=8, max=159, avg=27.74, stdev=21.93 00:34:50.266 clat percentiles (msec): 00:34:50.266 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 15], 00:34:50.266 | 30.00th=[ 16], 40.00th=[ 18], 50.00th=[ 21], 60.00th=[ 23], 00:34:50.266 | 70.00th=[ 25], 80.00th=[ 35], 90.00th=[ 56], 95.00th=[ 86], 00:34:50.266 | 99.00th=[ 108], 99.50th=[ 114], 99.90th=[ 132], 99.95th=[ 142], 00:34:50.266 | 99.99th=[ 161] 00:34:50.266 bw ( KiB/s): min= 624, max= 3848, per=5.55%, avg=2277.37, stdev=1235.54, samples=19 00:34:50.266 iops : min= 156, max= 962, avg=569.32, stdev=308.91, samples=19 00:34:50.266 lat (msec) : 10=1.09%, 20=48.26%, 50=39.66%, 100=9.05%, 250=1.93% 00:34:50.266 cpu : usr=63.90%, sys=4.43%, ctx=824, majf=0, minf=9 00:34:50.266 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.4%, 16=16.1%, 32=0.0%, >=64=0.0% 00:34:50.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 issued rwts: total=5756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.266 filename2: (groupid=0, jobs=1): err= 0: pid=74715: Mon Jul 22 16:11:51 2024 00:34:50.266 read: IOPS=548, BW=2195KiB/s (2247kB/s)(21.5MiB/10024msec) 00:34:50.266 slat (usec): min=8, max=8038, avg=24.74, stdev=195.62 00:34:50.266 clat (msec): min=8, max=166, avg=29.04, stdev=22.89 00:34:50.266 lat (msec): min=8, max=166, avg=29.07, stdev=22.89 00:34:50.266 clat percentiles (msec): 00:34:50.266 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 15], 20.00th=[ 16], 00:34:50.266 | 30.00th=[ 17], 40.00th=[ 19], 50.00th=[ 22], 60.00th=[ 24], 00:34:50.266 | 70.00th=[ 25], 80.00th=[ 36], 90.00th=[ 61], 95.00th=[ 85], 00:34:50.266 | 99.00th=[ 121], 99.50th=[ 131], 99.90th=[ 136], 99.95th=[ 144], 00:34:50.266 | 99.99th=[ 167] 00:34:50.266 bw ( KiB/s): min= 513, max= 4152, per=5.31%, avg=2180.89, stdev=1194.50, samples=19 00:34:50.266 iops : min= 128, max= 1038, avg=545.16, stdev=298.65, samples=19 00:34:50.266 lat (msec) : 10=0.42%, 20=45.98%, 50=42.58%, 100=8.56%, 250=2.45% 00:34:50.266 cpu : usr=51.52%, sys=3.39%, ctx=933, majf=0, minf=9 00:34:50.266 IO depths : 1=0.1%, 2=0.3%, 4=1.4%, 8=81.7%, 16=16.6%, 32=0.0%, >=64=0.0% 00:34:50.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.3%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 issued rwts: total=5500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.266 filename2: (groupid=0, jobs=1): err= 0: pid=74716: Mon Jul 22 16:11:51 2024 00:34:50.266 read: IOPS=293, BW=1173KiB/s (1201kB/s)(11.5MiB/10046msec) 00:34:50.266 slat (usec): min=3, max=8181, avg=94.01, stdev=632.46 00:34:50.266 clat (msec): min=12, max=132, avg=53.92, stdev=21.02 00:34:50.266 lat (msec): min=12, max=132, avg=54.01, stdev=21.00 00:34:50.266 clat percentiles (msec): 00:34:50.266 | 1.00th=[ 23], 5.00th=[ 26], 10.00th=[ 35], 20.00th=[ 36], 00:34:50.266 | 30.00th=[ 38], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 58], 00:34:50.266 | 70.00th=[ 61], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 96], 00:34:50.266 | 99.00th=[ 112], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 133], 00:34:50.266 | 99.99th=[ 133] 00:34:50.266 bw ( KiB/s): min= 640, max= 1760, per=2.86%, avg=1173.70, stdev=342.73, samples=20 00:34:50.266 iops : min= 160, max= 440, avg=293.40, stdev=85.70, samples=20 00:34:50.266 lat (msec) : 20=0.65%, 50=55.55%, 100=41.46%, 250=2.34% 00:34:50.266 cpu : usr=33.34%, sys=2.03%, ctx=1001, majf=0, minf=9 00:34:50.266 IO depths : 1=0.5%, 2=3.7%, 4=15.1%, 8=66.8%, 16=14.0%, 32=0.0%, >=64=0.0% 00:34:50.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 complete : 0=0.0%, 4=91.9%, 8=4.5%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 issued rwts: total=2945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.266 filename2: (groupid=0, jobs=1): err= 0: pid=74717: Mon Jul 22 16:11:51 2024 00:34:50.266 read: IOPS=311, BW=1244KiB/s (1274kB/s)(12.2MiB/10041msec) 00:34:50.266 slat (usec): min=4, max=10113, avg=74.47, stdev=491.18 00:34:50.266 clat (msec): min=10, max=120, avg=50.98, stdev=20.61 00:34:50.266 lat (msec): min=10, max=120, avg=51.06, stdev=20.59 00:34:50.266 clat percentiles (msec): 00:34:50.266 | 1.00th=[ 14], 5.00th=[ 24], 10.00th=[ 26], 20.00th=[ 36], 00:34:50.266 | 30.00th=[ 36], 40.00th=[ 42], 50.00th=[ 48], 60.00th=[ 51], 00:34:50.266 | 70.00th=[ 61], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 91], 00:34:50.266 | 99.00th=[ 108], 99.50th=[ 112], 99.90th=[ 121], 99.95th=[ 121], 00:34:50.266 | 99.99th=[ 121] 00:34:50.266 bw ( KiB/s): min= 768, max= 1976, per=3.03%, avg=1244.25, stdev=404.65, samples=20 00:34:50.266 iops : min= 192, max= 494, avg=311.00, stdev=101.14, samples=20 00:34:50.266 lat (msec) : 20=1.66%, 50=57.97%, 100=38.00%, 250=2.37% 00:34:50.266 cpu : usr=35.77%, sys=2.13%, ctx=996, majf=0, minf=9 00:34:50.266 IO depths : 1=0.2%, 2=2.4%, 4=10.5%, 8=71.7%, 16=15.2%, 32=0.0%, >=64=0.0% 00:34:50.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 complete : 0=0.0%, 4=90.8%, 8=6.7%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 issued rwts: total=3124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.266 filename2: (groupid=0, jobs=1): err= 0: pid=74718: Mon Jul 22 16:11:51 2024 00:34:50.266 read: IOPS=584, BW=2337KiB/s (2393kB/s)(22.8MiB/10009msec) 00:34:50.266 slat (usec): min=4, max=8023, avg=22.51, stdev=139.00 00:34:50.266 clat (msec): min=8, max=166, avg=27.29, stdev=21.95 00:34:50.266 lat (msec): min=8, max=166, avg=27.31, stdev=21.95 00:34:50.266 clat percentiles (msec): 00:34:50.266 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 15], 00:34:50.266 | 30.00th=[ 16], 
40.00th=[ 17], 50.00th=[ 21], 60.00th=[ 22], 00:34:50.266 | 70.00th=[ 24], 80.00th=[ 34], 90.00th=[ 57], 95.00th=[ 85], 00:34:50.266 | 99.00th=[ 108], 99.50th=[ 121], 99.90th=[ 131], 99.95th=[ 132], 00:34:50.266 | 99.99th=[ 167] 00:34:50.266 bw ( KiB/s): min= 624, max= 4096, per=5.69%, avg=2335.05, stdev=1250.07, samples=20 00:34:50.266 iops : min= 156, max= 1024, avg=583.75, stdev=312.53, samples=20 00:34:50.266 lat (msec) : 10=0.97%, 20=49.24%, 50=39.49%, 100=8.36%, 250=1.93% 00:34:50.266 cpu : usr=66.71%, sys=4.46%, ctx=923, majf=0, minf=9 00:34:50.266 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.7%, 16=15.8%, 32=0.0%, >=64=0.0% 00:34:50.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 issued rwts: total=5847,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.266 filename2: (groupid=0, jobs=1): err= 0: pid=74719: Mon Jul 22 16:11:51 2024 00:34:50.266 read: IOPS=558, BW=2233KiB/s (2287kB/s)(21.8MiB/10006msec) 00:34:50.266 slat (usec): min=5, max=8055, avg=22.36, stdev=161.28 00:34:50.266 clat (msec): min=8, max=159, avg=28.53, stdev=23.47 00:34:50.266 lat (msec): min=8, max=159, avg=28.56, stdev=23.48 00:34:50.266 clat percentiles (msec): 00:34:50.266 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 14], 20.00th=[ 16], 00:34:50.266 | 30.00th=[ 16], 40.00th=[ 18], 50.00th=[ 21], 60.00th=[ 22], 00:34:50.266 | 70.00th=[ 24], 80.00th=[ 35], 90.00th=[ 70], 95.00th=[ 87], 00:34:50.266 | 99.00th=[ 112], 99.50th=[ 122], 99.90th=[ 146], 99.95th=[ 157], 00:34:50.266 | 99.99th=[ 159] 00:34:50.266 bw ( KiB/s): min= 624, max= 3704, per=5.38%, avg=2207.05, stdev=1241.25, samples=19 00:34:50.266 iops : min= 156, max= 926, avg=551.74, stdev=310.34, samples=19 00:34:50.266 lat (msec) : 10=1.52%, 20=46.38%, 50=40.58%, 100=9.31%, 250=2.22% 00:34:50.266 cpu : usr=51.20%, sys=3.59%, ctx=921, majf=0, minf=9 00:34:50.266 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=80.8%, 16=16.2%, 32=0.0%, >=64=0.0% 00:34:50.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 complete : 0=0.0%, 4=87.9%, 8=11.6%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 issued rwts: total=5587,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.266 filename2: (groupid=0, jobs=1): err= 0: pid=74720: Mon Jul 22 16:11:51 2024 00:34:50.266 read: IOPS=539, BW=2158KiB/s (2210kB/s)(21.1MiB/10008msec) 00:34:50.266 slat (usec): min=3, max=8055, avg=19.52, stdev=172.90 00:34:50.266 clat (msec): min=4, max=173, avg=29.56, stdev=22.18 00:34:50.266 lat (msec): min=4, max=173, avg=29.58, stdev=22.18 00:34:50.266 clat percentiles (msec): 00:34:50.266 | 1.00th=[ 10], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 16], 00:34:50.266 | 30.00th=[ 17], 40.00th=[ 19], 50.00th=[ 22], 60.00th=[ 24], 00:34:50.266 | 70.00th=[ 28], 80.00th=[ 40], 90.00th=[ 61], 95.00th=[ 85], 00:34:50.266 | 99.00th=[ 109], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 144], 00:34:50.266 | 99.99th=[ 174] 00:34:50.266 bw ( KiB/s): min= 512, max= 3568, per=5.19%, avg=2130.53, stdev=1121.49, samples=19 00:34:50.266 iops : min= 128, max= 892, avg=532.63, stdev=280.37, samples=19 00:34:50.266 lat (msec) : 10=1.04%, 20=44.19%, 50=43.35%, 100=9.63%, 250=1.80% 00:34:50.266 cpu : usr=42.16%, sys=2.74%, ctx=1223, majf=0, minf=9 00:34:50.266 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=81.2%, 16=17.3%, 32=0.0%, >=64=0.0% 00:34:50.266 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 complete : 0=0.0%, 4=88.4%, 8=11.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.266 issued rwts: total=5400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:50.266 00:34:50.266 Run status group 0 (all jobs): 00:34:50.266 READ: bw=40.1MiB/s (42.0MB/s), 1173KiB/s-2349KiB/s (1201kB/s-2405kB/s), io=402MiB (422MB), run=10001-10048msec 00:34:50.267 16:11:51 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:50.267 16:11:51 -- target/dif.sh@43 -- # local sub 00:34:50.267 16:11:51 -- target/dif.sh@45 -- # for sub in "$@" 00:34:50.267 16:11:51 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:50.267 16:11:51 -- target/dif.sh@36 -- # local sub_id=0 00:34:50.267 16:11:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:50.267 16:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.267 16:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:50.267 16:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.267 16:11:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:50.267 16:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.267 16:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:50.267 16:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.267 16:11:51 -- target/dif.sh@45 -- # for sub in "$@" 00:34:50.267 16:11:51 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:50.267 16:11:51 -- target/dif.sh@36 -- # local sub_id=1 00:34:50.267 16:11:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:50.267 16:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.267 16:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:50.267 16:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.267 16:11:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:50.267 16:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.267 16:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:50.267 16:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.267 16:11:51 -- target/dif.sh@45 -- # for sub in "$@" 00:34:50.267 16:11:51 -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:50.267 16:11:51 -- target/dif.sh@36 -- # local sub_id=2 00:34:50.267 16:11:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:50.267 16:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.267 16:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:50.267 16:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.267 16:11:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:50.267 16:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.267 16:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:50.267 16:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.267 16:11:51 -- target/dif.sh@115 -- # NULL_DIF=1 00:34:50.267 16:11:51 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:50.267 16:11:51 -- target/dif.sh@115 -- # numjobs=2 00:34:50.267 16:11:51 -- target/dif.sh@115 -- # iodepth=8 00:34:50.267 16:11:51 -- target/dif.sh@115 -- # runtime=5 00:34:50.267 16:11:51 -- target/dif.sh@115 -- # files=1 00:34:50.267 16:11:51 -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:50.267 16:11:51 -- target/dif.sh@28 -- # local sub 00:34:50.267 16:11:51 -- 
target/dif.sh@30 -- # for sub in "$@" 00:34:50.267 16:11:51 -- target/dif.sh@31 -- # create_subsystem 0 00:34:50.267 16:11:51 -- target/dif.sh@18 -- # local sub_id=0 00:34:50.267 16:11:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:50.267 16:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.267 16:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:50.267 bdev_null0 00:34:50.267 16:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.267 16:11:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:50.267 16:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.267 16:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:50.267 16:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.267 16:11:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:50.267 16:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.267 16:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:50.267 16:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.267 16:11:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:50.267 16:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.267 16:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:50.267 [2024-07-22 16:11:51.815136] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:50.267 16:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.267 16:11:51 -- target/dif.sh@30 -- # for sub in "$@" 00:34:50.267 16:11:51 -- target/dif.sh@31 -- # create_subsystem 1 00:34:50.267 16:11:51 -- target/dif.sh@18 -- # local sub_id=1 00:34:50.267 16:11:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:50.267 16:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.267 16:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:50.267 bdev_null1 00:34:50.267 16:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.267 16:11:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:50.267 16:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.267 16:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:50.267 16:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.267 16:11:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:50.267 16:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.267 16:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:50.267 16:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.267 16:11:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:50.267 16:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:50.267 16:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:50.267 16:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:50.267 16:11:51 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:50.267 16:11:51 -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:50.267 16:11:51 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:50.267 16:11:51 -- nvmf/common.sh@520 -- # config=() 
00:34:50.267 16:11:51 -- nvmf/common.sh@520 -- # local subsystem config 00:34:50.267 16:11:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:50.267 16:11:51 -- target/dif.sh@82 -- # gen_fio_conf 00:34:50.267 16:11:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:50.267 { 00:34:50.267 "params": { 00:34:50.267 "name": "Nvme$subsystem", 00:34:50.267 "trtype": "$TEST_TRANSPORT", 00:34:50.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.267 "adrfam": "ipv4", 00:34:50.267 "trsvcid": "$NVMF_PORT", 00:34:50.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.267 "hdgst": ${hdgst:-false}, 00:34:50.267 "ddgst": ${ddgst:-false} 00:34:50.267 }, 00:34:50.267 "method": "bdev_nvme_attach_controller" 00:34:50.267 } 00:34:50.267 EOF 00:34:50.267 )") 00:34:50.267 16:11:51 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:50.267 16:11:51 -- target/dif.sh@54 -- # local file 00:34:50.267 16:11:51 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:50.267 16:11:51 -- target/dif.sh@56 -- # cat 00:34:50.267 16:11:51 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:50.267 16:11:51 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:50.267 16:11:51 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:50.267 16:11:51 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:50.267 16:11:51 -- common/autotest_common.sh@1320 -- # shift 00:34:50.267 16:11:51 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:50.267 16:11:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:50.267 16:11:51 -- nvmf/common.sh@542 -- # cat 00:34:50.267 16:11:51 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:50.267 16:11:51 -- target/dif.sh@72 -- # (( file <= files )) 00:34:50.267 16:11:51 -- target/dif.sh@73 -- # cat 00:34:50.267 16:11:51 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:50.267 16:11:51 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:50.267 16:11:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:50.267 16:11:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:50.267 16:11:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:50.267 { 00:34:50.267 "params": { 00:34:50.267 "name": "Nvme$subsystem", 00:34:50.267 "trtype": "$TEST_TRANSPORT", 00:34:50.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.267 "adrfam": "ipv4", 00:34:50.267 "trsvcid": "$NVMF_PORT", 00:34:50.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.267 "hdgst": ${hdgst:-false}, 00:34:50.267 "ddgst": ${ddgst:-false} 00:34:50.267 }, 00:34:50.267 "method": "bdev_nvme_attach_controller" 00:34:50.267 } 00:34:50.267 EOF 00:34:50.267 )") 00:34:50.267 16:11:51 -- target/dif.sh@72 -- # (( file++ )) 00:34:50.267 16:11:51 -- target/dif.sh@72 -- # (( file <= files )) 00:34:50.267 16:11:51 -- nvmf/common.sh@542 -- # cat 00:34:50.267 16:11:51 -- nvmf/common.sh@544 -- # jq . 
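The xtrace above shows target/dif.sh assembling bdev_nvme_attach_controller entries with gen_nvmf_target_json and handing them to fio_bdev on /dev/fd/62, while the generated fio job file arrives on /dev/fd/61. As a rough standalone sketch of the same mechanism, assuming an SPDK build under ./spdk and a target already listening on 10.0.0.2:4420 (file names and job parameters below are illustrative, not taken from this run):

    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0"
              }
            }
          ]
        }
      ]
    }
    EOF
    # drive the attached namespace (bdev Nvme0n1) through the SPDK fio bdev plugin
    LD_PRELOAD=./spdk/build/fio/spdk_bdev fio --ioengine=spdk_bdev \
      --spdk_json_conf=/tmp/nvme0.json --thread=1 \
      --name=randread --filename=Nvme0n1 --rw=randread --bs=8k \
      --iodepth=8 --runtime=5 --time_based=1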
00:34:50.267 16:11:51 -- nvmf/common.sh@545 -- # IFS=, 00:34:50.267 16:11:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:50.267 "params": { 00:34:50.267 "name": "Nvme0", 00:34:50.267 "trtype": "tcp", 00:34:50.267 "traddr": "10.0.0.2", 00:34:50.267 "adrfam": "ipv4", 00:34:50.267 "trsvcid": "4420", 00:34:50.267 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:50.267 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:50.267 "hdgst": false, 00:34:50.267 "ddgst": false 00:34:50.267 }, 00:34:50.267 "method": "bdev_nvme_attach_controller" 00:34:50.267 },{ 00:34:50.267 "params": { 00:34:50.267 "name": "Nvme1", 00:34:50.267 "trtype": "tcp", 00:34:50.267 "traddr": "10.0.0.2", 00:34:50.267 "adrfam": "ipv4", 00:34:50.267 "trsvcid": "4420", 00:34:50.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:50.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:50.267 "hdgst": false, 00:34:50.267 "ddgst": false 00:34:50.267 }, 00:34:50.267 "method": "bdev_nvme_attach_controller" 00:34:50.267 }' 00:34:50.267 16:11:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:50.267 16:11:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:50.267 16:11:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:50.267 16:11:51 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:50.267 16:11:51 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:50.267 16:11:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:50.267 16:11:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:50.267 16:11:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:50.267 16:11:51 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:50.267 16:11:51 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:50.267 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:50.267 ... 00:34:50.267 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:50.267 ... 00:34:50.267 fio-3.35 00:34:50.267 Starting 4 threads 00:34:50.267 [2024-07-22 16:11:52.455314] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:34:50.267 [2024-07-22 16:11:52.455434] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:55.551 00:34:55.551 filename0: (groupid=0, jobs=1): err= 0: pid=74913: Mon Jul 22 16:11:57 2024 00:34:55.551 read: IOPS=2042, BW=16.0MiB/s (16.7MB/s)(79.8MiB/5001msec) 00:34:55.551 slat (nsec): min=6257, max=44614, avg=13279.55, stdev=4756.48 00:34:55.551 clat (usec): min=998, max=7787, avg=3881.45, stdev=1061.15 00:34:55.551 lat (usec): min=1010, max=7825, avg=3894.73, stdev=1060.91 00:34:55.551 clat percentiles (usec): 00:34:55.551 | 1.00th=[ 1942], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2606], 00:34:55.551 | 30.00th=[ 3032], 40.00th=[ 3752], 50.00th=[ 3884], 60.00th=[ 4621], 00:34:55.551 | 70.00th=[ 4686], 80.00th=[ 4817], 90.00th=[ 4948], 95.00th=[ 5211], 00:34:55.551 | 99.00th=[ 6194], 99.50th=[ 6259], 99.90th=[ 6456], 99.95th=[ 6587], 00:34:55.551 | 99.99th=[ 7504] 00:34:55.551 bw ( KiB/s): min=15088, max=18016, per=25.27%, avg=16344.89, stdev=906.69, samples=9 00:34:55.551 iops : min= 1886, max= 2252, avg=2043.11, stdev=113.34, samples=9 00:34:55.551 lat (usec) : 1000=0.01% 00:34:55.551 lat (msec) : 2=1.34%, 4=50.95%, 10=47.70% 00:34:55.551 cpu : usr=91.00%, sys=7.94%, ctx=9, majf=0, minf=9 00:34:55.551 IO depths : 1=0.1%, 2=3.5%, 4=62.0%, 8=34.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:55.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.551 complete : 0=0.0%, 4=98.7%, 8=1.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.551 issued rwts: total=10216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.551 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:55.551 filename0: (groupid=0, jobs=1): err= 0: pid=74914: Mon Jul 22 16:11:57 2024 00:34:55.551 read: IOPS=2121, BW=16.6MiB/s (17.4MB/s)(82.9MiB/5001msec) 00:34:55.551 slat (nsec): min=5756, max=83428, avg=15911.01, stdev=4603.21 00:34:55.551 clat (usec): min=784, max=7027, avg=3731.68, stdev=1067.76 00:34:55.551 lat (usec): min=792, max=7049, avg=3747.59, stdev=1066.96 00:34:55.551 clat percentiles (usec): 00:34:55.551 | 1.00th=[ 1860], 5.00th=[ 2114], 10.00th=[ 2376], 20.00th=[ 2606], 00:34:55.551 | 30.00th=[ 2769], 40.00th=[ 3589], 50.00th=[ 3818], 60.00th=[ 4146], 00:34:55.551 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 4883], 95.00th=[ 5080], 00:34:55.551 | 99.00th=[ 6194], 99.50th=[ 6259], 99.90th=[ 6456], 99.95th=[ 6521], 00:34:55.551 | 99.99th=[ 6652] 00:34:55.551 bw ( KiB/s): min=15120, max=18064, per=26.34%, avg=17032.89, stdev=1105.34, samples=9 00:34:55.551 iops : min= 1890, max= 2258, avg=2129.11, stdev=138.17, samples=9 00:34:55.551 lat (usec) : 1000=0.04% 00:34:55.551 lat (msec) : 2=2.50%, 4=55.22%, 10=42.24% 00:34:55.551 cpu : usr=90.80%, sys=8.08%, ctx=8, majf=0, minf=9 00:34:55.551 IO depths : 1=0.1%, 2=1.0%, 4=63.4%, 8=35.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:55.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.552 complete : 0=0.0%, 4=99.6%, 8=0.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.552 issued rwts: total=10610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.552 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:55.552 filename1: (groupid=0, jobs=1): err= 0: pid=74915: Mon Jul 22 16:11:57 2024 00:34:55.552 read: IOPS=1813, BW=14.2MiB/s (14.9MB/s)(70.9MiB/5004msec) 00:34:55.552 slat (nsec): min=4309, max=52691, avg=14967.48, stdev=6384.99 00:34:55.552 clat (usec): min=1294, max=8584, avg=4360.84, stdev=994.19 00:34:55.552 lat (usec): min=1305, max=8617, avg=4375.81, stdev=993.82 
00:34:55.552 clat percentiles (usec): 00:34:55.552 | 1.00th=[ 1926], 5.00th=[ 2278], 10.00th=[ 2933], 20.00th=[ 3752], 00:34:55.552 | 30.00th=[ 3818], 40.00th=[ 4359], 50.00th=[ 4752], 60.00th=[ 4817], 00:34:55.552 | 70.00th=[ 4883], 80.00th=[ 4948], 90.00th=[ 5145], 95.00th=[ 5735], 00:34:55.552 | 99.00th=[ 6783], 99.50th=[ 7701], 99.90th=[ 8160], 99.95th=[ 8586], 00:34:55.552 | 99.99th=[ 8586] 00:34:55.552 bw ( KiB/s): min=12512, max=17152, per=22.19%, avg=14352.00, stdev=1480.65, samples=9 00:34:55.552 iops : min= 1564, max= 2144, avg=1794.00, stdev=185.08, samples=9 00:34:55.552 lat (msec) : 2=2.46%, 4=32.04%, 10=65.50% 00:34:55.552 cpu : usr=89.81%, sys=8.85%, ctx=9, majf=0, minf=1 00:34:55.552 IO depths : 1=0.1%, 2=11.7%, 4=57.3%, 8=30.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:55.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.552 complete : 0=0.0%, 4=95.5%, 8=4.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.552 issued rwts: total=9076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.552 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:55.552 filename1: (groupid=0, jobs=1): err= 0: pid=74916: Mon Jul 22 16:11:57 2024 00:34:55.552 read: IOPS=2108, BW=16.5MiB/s (17.3MB/s)(82.4MiB/5002msec) 00:34:55.552 slat (nsec): min=3648, max=49947, avg=16523.80, stdev=5085.49 00:34:55.552 clat (usec): min=1219, max=7501, avg=3751.21, stdev=1047.48 00:34:55.552 lat (usec): min=1230, max=7524, avg=3767.74, stdev=1046.60 00:34:55.552 clat percentiles (usec): 00:34:55.552 | 1.00th=[ 1926], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2606], 00:34:55.552 | 30.00th=[ 2802], 40.00th=[ 3687], 50.00th=[ 3818], 60.00th=[ 4293], 00:34:55.552 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 4883], 95.00th=[ 5080], 00:34:55.552 | 99.00th=[ 6194], 99.50th=[ 6259], 99.90th=[ 6456], 99.95th=[ 6456], 00:34:55.552 | 99.99th=[ 6587] 00:34:55.552 bw ( KiB/s): min=15072, max=18016, per=26.18%, avg=16933.33, stdev=961.96, samples=9 00:34:55.552 iops : min= 1884, max= 2252, avg=2116.67, stdev=120.25, samples=9 00:34:55.552 lat (msec) : 2=1.47%, 4=55.69%, 10=42.84% 00:34:55.552 cpu : usr=90.44%, sys=8.26%, ctx=7, majf=0, minf=9 00:34:55.552 IO depths : 1=0.1%, 2=1.4%, 4=63.2%, 8=35.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:55.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.552 complete : 0=0.0%, 4=99.5%, 8=0.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.552 issued rwts: total=10549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.552 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:55.552 00:34:55.552 Run status group 0 (all jobs): 00:34:55.552 READ: bw=63.2MiB/s (66.2MB/s), 14.2MiB/s-16.6MiB/s (14.9MB/s-17.4MB/s), io=316MiB (331MB), run=5001-5004msec 00:34:55.552 16:11:57 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:55.552 16:11:57 -- target/dif.sh@43 -- # local sub 00:34:55.552 16:11:57 -- target/dif.sh@45 -- # for sub in "$@" 00:34:55.552 16:11:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:55.552 16:11:57 -- target/dif.sh@36 -- # local sub_id=0 00:34:55.552 16:11:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:55.552 16:11:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:55.552 16:11:57 -- common/autotest_common.sh@10 -- # set +x 00:34:55.552 16:11:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:55.552 16:11:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:55.552 16:11:57 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:34:55.552 16:11:57 -- common/autotest_common.sh@10 -- # set +x 00:34:55.552 16:11:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:55.552 16:11:57 -- target/dif.sh@45 -- # for sub in "$@" 00:34:55.552 16:11:57 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:55.552 16:11:57 -- target/dif.sh@36 -- # local sub_id=1 00:34:55.552 16:11:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:55.552 16:11:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:55.552 16:11:57 -- common/autotest_common.sh@10 -- # set +x 00:34:55.552 16:11:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:55.552 16:11:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:55.552 16:11:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:55.552 16:11:57 -- common/autotest_common.sh@10 -- # set +x 00:34:55.552 ************************************ 00:34:55.552 END TEST fio_dif_rand_params 00:34:55.552 ************************************ 00:34:55.552 16:11:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:55.552 00:34:55.552 real 0m29.691s 00:34:55.552 user 2m57.900s 00:34:55.552 sys 0m11.184s 00:34:55.552 16:11:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:55.552 16:11:57 -- common/autotest_common.sh@10 -- # set +x 00:34:55.552 16:11:57 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:55.552 16:11:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:55.552 16:11:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:55.552 16:11:57 -- common/autotest_common.sh@10 -- # set +x 00:34:55.552 ************************************ 00:34:55.552 START TEST fio_dif_digest 00:34:55.552 ************************************ 00:34:55.552 16:11:57 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:34:55.552 16:11:57 -- target/dif.sh@123 -- # local NULL_DIF 00:34:55.552 16:11:57 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:55.552 16:11:57 -- target/dif.sh@125 -- # local hdgst ddgst 00:34:55.552 16:11:57 -- target/dif.sh@127 -- # NULL_DIF=3 00:34:55.552 16:11:57 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:55.552 16:11:57 -- target/dif.sh@127 -- # numjobs=3 00:34:55.552 16:11:57 -- target/dif.sh@127 -- # iodepth=3 00:34:55.552 16:11:57 -- target/dif.sh@127 -- # runtime=10 00:34:55.552 16:11:57 -- target/dif.sh@128 -- # hdgst=true 00:34:55.552 16:11:57 -- target/dif.sh@128 -- # ddgst=true 00:34:55.552 16:11:57 -- target/dif.sh@130 -- # create_subsystems 0 00:34:55.552 16:11:57 -- target/dif.sh@28 -- # local sub 00:34:55.552 16:11:57 -- target/dif.sh@30 -- # for sub in "$@" 00:34:55.552 16:11:57 -- target/dif.sh@31 -- # create_subsystem 0 00:34:55.552 16:11:57 -- target/dif.sh@18 -- # local sub_id=0 00:34:55.552 16:11:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:55.552 16:11:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:55.552 16:11:57 -- common/autotest_common.sh@10 -- # set +x 00:34:55.552 bdev_null0 00:34:55.552 16:11:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:55.552 16:11:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:55.552 16:11:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:55.552 16:11:57 -- common/autotest_common.sh@10 -- # set +x 00:34:55.552 16:11:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:55.552 16:11:57 -- target/dif.sh@23 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:55.552 16:11:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:55.552 16:11:57 -- common/autotest_common.sh@10 -- # set +x 00:34:55.552 16:11:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:55.552 16:11:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:55.552 16:11:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:55.552 16:11:57 -- common/autotest_common.sh@10 -- # set +x 00:34:55.552 [2024-07-22 16:11:57.880365] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:55.552 16:11:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:55.552 16:11:57 -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:55.552 16:11:57 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:55.552 16:11:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:55.552 16:11:57 -- nvmf/common.sh@520 -- # config=() 00:34:55.552 16:11:57 -- nvmf/common.sh@520 -- # local subsystem config 00:34:55.552 16:11:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:55.552 16:11:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:55.552 16:11:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:55.552 { 00:34:55.552 "params": { 00:34:55.552 "name": "Nvme$subsystem", 00:34:55.552 "trtype": "$TEST_TRANSPORT", 00:34:55.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:55.552 "adrfam": "ipv4", 00:34:55.552 "trsvcid": "$NVMF_PORT", 00:34:55.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:55.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:55.552 "hdgst": ${hdgst:-false}, 00:34:55.552 "ddgst": ${ddgst:-false} 00:34:55.552 }, 00:34:55.552 "method": "bdev_nvme_attach_controller" 00:34:55.552 } 00:34:55.552 EOF 00:34:55.552 )") 00:34:55.552 16:11:57 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:55.552 16:11:57 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:55.552 16:11:57 -- target/dif.sh@82 -- # gen_fio_conf 00:34:55.552 16:11:57 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:55.552 16:11:57 -- target/dif.sh@54 -- # local file 00:34:55.552 16:11:57 -- target/dif.sh@56 -- # cat 00:34:55.552 16:11:57 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:55.552 16:11:57 -- nvmf/common.sh@542 -- # cat 00:34:55.552 16:11:57 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:55.552 16:11:57 -- common/autotest_common.sh@1320 -- # shift 00:34:55.552 16:11:57 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:55.552 16:11:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:55.552 16:11:57 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:55.552 16:11:57 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:55.552 16:11:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:55.552 16:11:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:55.552 16:11:57 -- target/dif.sh@72 -- # (( file <= files )) 00:34:55.552 16:11:57 -- nvmf/common.sh@544 -- # jq . 
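For the digest pass the script sets hdgst=true and ddgst=true, so the attach parameters it generates request NVMe/TCP header and data digests on the initiator side. Outside the test harness, the target-side objects traced above map onto plain rpc.py calls roughly like this (bdev name, NQN and listener mirror the trace; the ./spdk path is illustrative):

    # null bdev with 16-byte metadata and DIF type 3, exported over NVMe/TCP
    ./spdk/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    ./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420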
00:34:55.552 16:11:57 -- nvmf/common.sh@545 -- # IFS=, 00:34:55.552 16:11:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:55.552 "params": { 00:34:55.553 "name": "Nvme0", 00:34:55.553 "trtype": "tcp", 00:34:55.553 "traddr": "10.0.0.2", 00:34:55.553 "adrfam": "ipv4", 00:34:55.553 "trsvcid": "4420", 00:34:55.553 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:55.553 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:55.553 "hdgst": true, 00:34:55.553 "ddgst": true 00:34:55.553 }, 00:34:55.553 "method": "bdev_nvme_attach_controller" 00:34:55.553 }' 00:34:55.553 16:11:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:55.553 16:11:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:55.553 16:11:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:55.553 16:11:57 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:55.553 16:11:57 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:55.553 16:11:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:55.553 16:11:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:55.553 16:11:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:55.553 16:11:57 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:55.553 16:11:57 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:55.553 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:55.553 ... 00:34:55.553 fio-3.35 00:34:55.553 Starting 3 threads 00:34:55.811 [2024-07-22 16:11:58.432433] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:34:55.811 [2024-07-22 16:11:58.432722] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:05.803 00:35:05.803 filename0: (groupid=0, jobs=1): err= 0: pid=75025: Mon Jul 22 16:12:08 2024 00:35:05.803 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(269MiB/10001msec) 00:35:05.803 slat (nsec): min=3847, max=56164, avg=19636.66, stdev=7037.66 00:35:05.803 clat (usec): min=13043, max=22412, avg=13900.61, stdev=1352.61 00:35:05.803 lat (usec): min=13057, max=22429, avg=13920.24, stdev=1353.48 00:35:05.803 clat percentiles (usec): 00:35:05.803 | 1.00th=[13042], 5.00th=[13173], 10.00th=[13173], 20.00th=[13173], 00:35:05.803 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13304], 60.00th=[13304], 00:35:05.803 | 70.00th=[13435], 80.00th=[14091], 90.00th=[16188], 95.00th=[17433], 00:35:05.803 | 99.00th=[17957], 99.50th=[18220], 99.90th=[22414], 99.95th=[22414], 00:35:05.803 | 99.99th=[22414] 00:35:05.803 bw ( KiB/s): min=23040, max=29184, per=33.27%, avg=27486.32, stdev=2025.14, samples=19 00:35:05.803 iops : min= 180, max= 228, avg=214.74, stdev=15.82, samples=19 00:35:05.803 lat (msec) : 20=99.86%, 50=0.14% 00:35:05.803 cpu : usr=91.37%, sys=7.96%, ctx=20, majf=0, minf=0 00:35:05.803 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:05.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.803 issued rwts: total=2151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.803 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:05.803 filename0: (groupid=0, jobs=1): err= 0: pid=75026: Mon Jul 22 16:12:08 2024 00:35:05.803 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(269MiB/10003msec) 00:35:05.803 slat (nsec): min=8189, max=57103, avg=18875.12, stdev=6637.47 00:35:05.803 clat (usec): min=5024, max=22411, avg=13885.80, stdev=1381.52 00:35:05.803 lat (usec): min=5033, max=22428, avg=13904.68, stdev=1381.72 00:35:05.803 clat percentiles (usec): 00:35:05.803 | 1.00th=[13173], 5.00th=[13173], 10.00th=[13173], 20.00th=[13173], 00:35:05.803 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13304], 60.00th=[13304], 00:35:05.803 | 70.00th=[13435], 80.00th=[14091], 90.00th=[16188], 95.00th=[17433], 00:35:05.803 | 99.00th=[17957], 99.50th=[18220], 99.90th=[22414], 99.95th=[22414], 00:35:05.803 | 99.99th=[22414] 00:35:05.803 bw ( KiB/s): min=23040, max=29184, per=33.27%, avg=27486.32, stdev=1925.61, samples=19 00:35:05.803 iops : min= 180, max= 228, avg=214.74, stdev=15.04, samples=19 00:35:05.803 lat (msec) : 10=0.14%, 20=99.72%, 50=0.14% 00:35:05.803 cpu : usr=92.56%, sys=6.83%, ctx=14, majf=0, minf=0 00:35:05.803 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:05.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.803 issued rwts: total=2154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.803 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:05.803 filename0: (groupid=0, jobs=1): err= 0: pid=75027: Mon Jul 22 16:12:08 2024 00:35:05.803 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(269MiB/10001msec) 00:35:05.803 slat (nsec): min=4517, max=95346, avg=18706.69, stdev=7352.74 00:35:05.803 clat (usec): min=13045, max=22413, avg=13902.84, stdev=1354.87 00:35:05.803 lat (usec): min=13060, max=22435, avg=13921.54, stdev=1355.46 00:35:05.803 clat percentiles (usec): 00:35:05.803 | 
1.00th=[13042], 5.00th=[13173], 10.00th=[13173], 20.00th=[13173], 00:35:05.803 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13304], 60.00th=[13304], 00:35:05.803 | 70.00th=[13435], 80.00th=[14091], 90.00th=[16188], 95.00th=[17433], 00:35:05.803 | 99.00th=[17957], 99.50th=[18220], 99.90th=[22414], 99.95th=[22414], 00:35:05.803 | 99.99th=[22414] 00:35:05.803 bw ( KiB/s): min=23040, max=29184, per=33.27%, avg=27486.32, stdev=2025.14, samples=19 00:35:05.803 iops : min= 180, max= 228, avg=214.74, stdev=15.82, samples=19 00:35:05.803 lat (msec) : 20=99.86%, 50=0.14% 00:35:05.803 cpu : usr=91.19%, sys=8.18%, ctx=10, majf=0, minf=0 00:35:05.803 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:05.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.803 issued rwts: total=2151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.803 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:05.803 00:35:05.803 Run status group 0 (all jobs): 00:35:05.803 READ: bw=80.7MiB/s (84.6MB/s), 26.9MiB/s-26.9MiB/s (28.2MB/s-28.2MB/s), io=807MiB (846MB), run=10001-10003msec 00:35:06.061 16:12:08 -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:06.062 16:12:08 -- target/dif.sh@43 -- # local sub 00:35:06.062 16:12:08 -- target/dif.sh@45 -- # for sub in "$@" 00:35:06.062 16:12:08 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:06.062 16:12:08 -- target/dif.sh@36 -- # local sub_id=0 00:35:06.062 16:12:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:06.062 16:12:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:06.062 16:12:08 -- common/autotest_common.sh@10 -- # set +x 00:35:06.062 16:12:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:06.062 16:12:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:06.062 16:12:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:06.062 16:12:08 -- common/autotest_common.sh@10 -- # set +x 00:35:06.062 ************************************ 00:35:06.062 END TEST fio_dif_digest 00:35:06.062 ************************************ 00:35:06.062 16:12:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:06.062 00:35:06.062 real 0m10.899s 00:35:06.062 user 0m28.108s 00:35:06.062 sys 0m2.519s 00:35:06.062 16:12:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:06.062 16:12:08 -- common/autotest_common.sh@10 -- # set +x 00:35:06.062 16:12:08 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:06.062 16:12:08 -- target/dif.sh@147 -- # nvmftestfini 00:35:06.062 16:12:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:06.062 16:12:08 -- nvmf/common.sh@116 -- # sync 00:35:06.062 16:12:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:06.062 16:12:08 -- nvmf/common.sh@119 -- # set +e 00:35:06.062 16:12:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:06.062 16:12:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:35:06.062 rmmod nvme_tcp 00:35:06.062 rmmod nvme_fabrics 00:35:06.062 rmmod nvme_keyring 00:35:06.062 16:12:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:35:06.062 16:12:08 -- nvmf/common.sh@123 -- # set -e 00:35:06.062 16:12:08 -- nvmf/common.sh@124 -- # return 0 00:35:06.062 16:12:08 -- nvmf/common.sh@477 -- # '[' -n 74217 ']' 00:35:06.062 16:12:08 -- nvmf/common.sh@478 -- # killprocess 74217 00:35:06.062 16:12:08 -- common/autotest_common.sh@926 -- # '[' -z 74217 ']' 00:35:06.062 16:12:08 
-- common/autotest_common.sh@930 -- # kill -0 74217 00:35:06.062 16:12:08 -- common/autotest_common.sh@931 -- # uname 00:35:06.062 16:12:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:06.062 16:12:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74217 00:35:06.062 16:12:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:06.062 16:12:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:06.062 16:12:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74217' 00:35:06.062 killing process with pid 74217 00:35:06.062 16:12:08 -- common/autotest_common.sh@945 -- # kill 74217 00:35:06.062 16:12:08 -- common/autotest_common.sh@950 -- # wait 74217 00:35:06.320 16:12:09 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:35:06.320 16:12:09 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:06.578 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:06.578 Waiting for block devices as requested 00:35:06.578 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:35:06.836 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:35:06.836 16:12:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:06.836 16:12:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:35:06.836 16:12:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:06.836 16:12:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:35:06.836 16:12:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.836 16:12:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:06.836 16:12:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.836 16:12:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:35:06.836 00:35:06.836 real 1m5.296s 00:35:06.836 user 4m45.561s 00:35:06.836 sys 0m22.591s 00:35:06.836 16:12:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:06.836 ************************************ 00:35:06.836 16:12:09 -- common/autotest_common.sh@10 -- # set +x 00:35:06.836 END TEST nvmf_dif 00:35:06.836 ************************************ 00:35:06.836 16:12:09 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:06.836 16:12:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:06.836 16:12:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:06.836 16:12:09 -- common/autotest_common.sh@10 -- # set +x 00:35:06.836 ************************************ 00:35:06.836 START TEST nvmf_abort_qd_sizes 00:35:06.836 ************************************ 00:35:06.836 16:12:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:06.836 * Looking for test storage... 
00:35:06.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:06.836 16:12:09 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:06.836 16:12:09 -- nvmf/common.sh@7 -- # uname -s 00:35:06.836 16:12:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:06.836 16:12:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:06.836 16:12:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:06.836 16:12:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:06.836 16:12:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:06.836 16:12:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:06.836 16:12:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:06.836 16:12:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:06.836 16:12:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:06.836 16:12:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:06.836 16:12:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:35:06.836 16:12:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=3afe7664-1acb-4c6d-8a94-b57f48f48b78 00:35:06.836 16:12:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:06.836 16:12:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:06.837 16:12:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:06.837 16:12:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:06.837 16:12:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.837 16:12:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.837 16:12:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.837 16:12:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.837 16:12:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.837 16:12:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.837 16:12:09 -- paths/export.sh@5 -- # export PATH 00:35:06.837 16:12:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.837 16:12:09 -- nvmf/common.sh@46 -- # : 0 00:35:06.837 16:12:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:35:06.837 16:12:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:35:06.837 16:12:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:35:06.837 16:12:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:06.837 16:12:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:06.837 16:12:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:35:06.837 16:12:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:35:06.837 16:12:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:35:06.837 16:12:09 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:35:06.837 16:12:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:35:06.837 16:12:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:06.837 16:12:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:35:06.837 16:12:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:35:06.837 16:12:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:35:06.837 16:12:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.837 16:12:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:06.837 16:12:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.837 16:12:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:35:06.837 16:12:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:35:06.837 16:12:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:35:06.837 16:12:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:35:06.837 16:12:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:35:06.837 16:12:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:35:06.837 16:12:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:06.837 16:12:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:06.837 16:12:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:35:06.837 16:12:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:35:06.837 16:12:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:06.837 16:12:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:06.837 16:12:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:06.837 16:12:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:06.837 16:12:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:06.837 16:12:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:06.837 16:12:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:06.837 16:12:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:06.837 16:12:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:35:07.101 16:12:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:35:07.102 Cannot find device "nvmf_tgt_br" 00:35:07.102 16:12:09 -- nvmf/common.sh@154 -- # true 00:35:07.102 16:12:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:35:07.102 Cannot find device "nvmf_tgt_br2" 00:35:07.102 16:12:09 -- nvmf/common.sh@155 -- # true 
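The nvmf_veth_init sequence that follows first tears down any stale interfaces (the "Cannot find device" messages above come from that cleanup and are expected), then rebuilds the loopback NVMe/TCP test network: a namespace for the target, veth pairs bridged on the host, and 10.0.0.x addresses on each end. Condensed to its essentials, the topology it creates looks roughly like this (a simplified sketch of the traced commands, not the verbatim sequence):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

(plus the link-up commands and a second target veth for 10.0.0.3, omitted here for brevity)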
00:35:07.102 16:12:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:35:07.102 16:12:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:35:07.102 Cannot find device "nvmf_tgt_br" 00:35:07.102 16:12:09 -- nvmf/common.sh@157 -- # true 00:35:07.102 16:12:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:35:07.102 Cannot find device "nvmf_tgt_br2" 00:35:07.102 16:12:09 -- nvmf/common.sh@158 -- # true 00:35:07.102 16:12:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:35:07.102 16:12:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:35:07.102 16:12:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:07.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:07.102 16:12:09 -- nvmf/common.sh@161 -- # true 00:35:07.102 16:12:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:07.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:07.102 16:12:09 -- nvmf/common.sh@162 -- # true 00:35:07.102 16:12:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:35:07.102 16:12:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:07.102 16:12:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:07.102 16:12:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:07.102 16:12:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:07.102 16:12:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:07.102 16:12:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:07.102 16:12:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:07.102 16:12:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:35:07.102 16:12:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:35:07.102 16:12:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:35:07.102 16:12:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:35:07.102 16:12:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:35:07.102 16:12:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:07.102 16:12:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:07.102 16:12:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:07.102 16:12:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:35:07.102 16:12:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:35:07.102 16:12:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:35:07.362 16:12:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:07.362 16:12:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:07.362 16:12:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:07.362 16:12:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:07.362 16:12:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:35:07.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:07.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:35:07.362 00:35:07.362 --- 10.0.0.2 ping statistics --- 00:35:07.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.362 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:35:07.362 16:12:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:35:07.362 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:07.362 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:35:07.362 00:35:07.362 --- 10.0.0.3 ping statistics --- 00:35:07.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.362 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:35:07.362 16:12:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:07.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:07.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:35:07.362 00:35:07.362 --- 10.0.0.1 ping statistics --- 00:35:07.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.362 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:35:07.362 16:12:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:07.362 16:12:10 -- nvmf/common.sh@421 -- # return 0 00:35:07.362 16:12:10 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:35:07.362 16:12:10 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:07.991 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:07.991 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:35:07.991 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:35:07.991 16:12:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:07.991 16:12:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:35:07.991 16:12:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:35:07.991 16:12:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:07.991 16:12:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:35:07.991 16:12:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:35:07.991 16:12:10 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:35:07.991 16:12:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:35:07.991 16:12:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:07.991 16:12:10 -- common/autotest_common.sh@10 -- # set +x 00:35:07.991 16:12:10 -- nvmf/common.sh@469 -- # nvmfpid=75611 00:35:07.991 16:12:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:07.991 16:12:10 -- nvmf/common.sh@470 -- # waitforlisten 75611 00:35:07.991 16:12:10 -- common/autotest_common.sh@819 -- # '[' -z 75611 ']' 00:35:07.991 16:12:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:07.991 16:12:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:07.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:07.991 16:12:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:07.991 16:12:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:07.991 16:12:10 -- common/autotest_common.sh@10 -- # set +x 00:35:08.249 [2024-07-22 16:12:10.863179] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:35:08.249 [2024-07-22 16:12:10.863283] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:08.249 [2024-07-22 16:12:10.996473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:08.249 [2024-07-22 16:12:11.066960] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:35:08.249 [2024-07-22 16:12:11.067148] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:08.249 [2024-07-22 16:12:11.067171] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:08.249 [2024-07-22 16:12:11.067182] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:08.249 [2024-07-22 16:12:11.067300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.249 [2024-07-22 16:12:11.067411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:08.249 [2024-07-22 16:12:11.067469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:08.249 [2024-07-22 16:12:11.067474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:09.184 16:12:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:09.184 16:12:11 -- common/autotest_common.sh@852 -- # return 0 00:35:09.184 16:12:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:35:09.184 16:12:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:09.184 16:12:11 -- common/autotest_common.sh@10 -- # set +x 00:35:09.184 16:12:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:09.184 16:12:11 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:09.184 16:12:11 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:35:09.184 16:12:11 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:35:09.184 16:12:11 -- scripts/common.sh@311 -- # local bdf bdfs 00:35:09.184 16:12:11 -- scripts/common.sh@312 -- # local nvmes 00:35:09.184 16:12:11 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:35:09.184 16:12:11 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:35:09.184 16:12:11 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:35:09.184 16:12:11 -- scripts/common.sh@297 -- # local bdf= 00:35:09.184 16:12:11 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:35:09.184 16:12:11 -- scripts/common.sh@232 -- # local class 00:35:09.184 16:12:11 -- scripts/common.sh@233 -- # local subclass 00:35:09.184 16:12:11 -- scripts/common.sh@234 -- # local progif 00:35:09.184 16:12:11 -- scripts/common.sh@235 -- # printf %02x 1 00:35:09.184 16:12:11 -- scripts/common.sh@235 -- # class=01 00:35:09.184 16:12:11 -- scripts/common.sh@236 -- # printf %02x 8 00:35:09.184 16:12:11 -- scripts/common.sh@236 -- # subclass=08 00:35:09.184 16:12:11 -- scripts/common.sh@237 -- # printf %02x 2 00:35:09.184 16:12:11 -- scripts/common.sh@237 -- # progif=02 00:35:09.184 16:12:11 -- scripts/common.sh@239 -- # hash lspci 00:35:09.184 16:12:11 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:35:09.184 16:12:11 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:35:09.184 16:12:11 -- scripts/common.sh@242 -- # grep -i -- -p02 00:35:09.184 16:12:11 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:35:09.184 16:12:11 -- scripts/common.sh@244 -- # tr -d '"' 00:35:09.184 16:12:11 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:35:09.184 16:12:11 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:35:09.184 16:12:11 -- scripts/common.sh@15 -- # local i 00:35:09.184 16:12:11 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:35:09.184 16:12:11 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:35:09.184 16:12:11 -- scripts/common.sh@24 -- # return 0 00:35:09.184 16:12:11 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:35:09.184 16:12:11 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:35:09.184 16:12:11 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:35:09.184 16:12:11 -- scripts/common.sh@15 -- # local i 00:35:09.184 16:12:11 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:35:09.184 16:12:11 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:35:09.184 16:12:11 -- scripts/common.sh@24 -- # return 0 00:35:09.184 16:12:11 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:35:09.184 16:12:11 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:35:09.184 16:12:11 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:35:09.184 16:12:11 -- scripts/common.sh@322 -- # uname -s 00:35:09.184 16:12:11 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:35:09.184 16:12:11 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:35:09.184 16:12:11 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:35:09.184 16:12:11 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:35:09.184 16:12:11 -- scripts/common.sh@322 -- # uname -s 00:35:09.184 16:12:11 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:35:09.184 16:12:11 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:35:09.184 16:12:11 -- scripts/common.sh@327 -- # (( 2 )) 00:35:09.184 16:12:11 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:35:09.184 16:12:11 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:35:09.184 16:12:11 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:35:09.184 16:12:11 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:35:09.184 16:12:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:09.184 16:12:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:09.184 16:12:11 -- common/autotest_common.sh@10 -- # set +x 00:35:09.184 ************************************ 00:35:09.184 START TEST spdk_target_abort 00:35:09.184 ************************************ 00:35:09.184 16:12:11 -- common/autotest_common.sh@1104 -- # spdk_target 00:35:09.184 16:12:11 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:09.184 16:12:11 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:35:09.184 16:12:11 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:35:09.184 16:12:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:09.184 16:12:11 -- common/autotest_common.sh@10 -- # set +x 00:35:09.184 spdk_targetn1 00:35:09.184 16:12:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:09.184 16:12:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:09.184 16:12:12 -- common/autotest_common.sh@10 -- # set +x 00:35:09.184 [2024-07-22 
16:12:12.004090] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:09.184 16:12:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:35:09.184 16:12:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:09.184 16:12:12 -- common/autotest_common.sh@10 -- # set +x 00:35:09.184 16:12:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:35:09.184 16:12:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:09.184 16:12:12 -- common/autotest_common.sh@10 -- # set +x 00:35:09.184 16:12:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:35:09.184 16:12:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:09.184 16:12:12 -- common/autotest_common.sh@10 -- # set +x 00:35:09.184 [2024-07-22 16:12:12.036296] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:09.184 16:12:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:09.184 16:12:12 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:12.463 Initializing NVMe Controllers 00:35:12.463 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:12.463 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:12.463 Initialization complete. Launching workers. 00:35:12.463 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 9994, failed: 0 00:35:12.463 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1018, failed to submit 8976 00:35:12.463 success 845, unsuccess 173, failed 0 00:35:12.463 16:12:15 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:12.463 16:12:15 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:16.647 Initializing NVMe Controllers 00:35:16.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:16.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:16.647 Initialization complete. Launching workers. 00:35:16.647 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 7346, failed: 0 00:35:16.647 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1122, failed to submit 6224 00:35:16.647 success 335, unsuccess 787, failed 0 00:35:16.647 16:12:18 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:16.647 16:12:18 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:19.204 Initializing NVMe Controllers 00:35:19.204 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:19.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:19.204 Initialization complete. Launching workers. 
00:35:19.204 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 26285, failed: 0 00:35:19.204 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2123, failed to submit 24162 00:35:19.204 success 304, unsuccess 1819, failed 0 00:35:19.204 16:12:21 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:35:19.204 16:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:19.204 16:12:21 -- common/autotest_common.sh@10 -- # set +x 00:35:19.204 16:12:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:19.204 16:12:21 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:19.204 16:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:19.204 16:12:21 -- common/autotest_common.sh@10 -- # set +x 00:35:19.204 16:12:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:19.204 16:12:22 -- target/abort_qd_sizes.sh@62 -- # killprocess 75611 00:35:19.204 16:12:22 -- common/autotest_common.sh@926 -- # '[' -z 75611 ']' 00:35:19.204 16:12:22 -- common/autotest_common.sh@930 -- # kill -0 75611 00:35:19.204 16:12:22 -- common/autotest_common.sh@931 -- # uname 00:35:19.204 16:12:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:19.204 16:12:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75611 00:35:19.462 16:12:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:19.462 killing process with pid 75611 00:35:19.462 16:12:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:19.462 16:12:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75611' 00:35:19.462 16:12:22 -- common/autotest_common.sh@945 -- # kill 75611 00:35:19.462 16:12:22 -- common/autotest_common.sh@950 -- # wait 75611 00:35:19.462 00:35:19.462 real 0m10.369s 00:35:19.462 user 0m41.621s 00:35:19.462 sys 0m2.343s 00:35:19.462 16:12:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:19.462 16:12:22 -- common/autotest_common.sh@10 -- # set +x 00:35:19.462 ************************************ 00:35:19.462 END TEST spdk_target_abort 00:35:19.462 ************************************ 00:35:19.719 16:12:22 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:35:19.719 16:12:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:19.719 16:12:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:19.719 16:12:22 -- common/autotest_common.sh@10 -- # set +x 00:35:19.719 ************************************ 00:35:19.719 START TEST kernel_target_abort 00:35:19.719 ************************************ 00:35:19.719 16:12:22 -- common/autotest_common.sh@1104 -- # kernel_target 00:35:19.719 16:12:22 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:35:19.719 16:12:22 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:35:19.720 16:12:22 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:35:19.720 16:12:22 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:35:19.720 16:12:22 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:35:19.720 16:12:22 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:19.720 16:12:22 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:19.720 16:12:22 -- nvmf/common.sh@627 -- # local block nvme 00:35:19.720 16:12:22 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:35:19.720 16:12:22 -- nvmf/common.sh@630 -- # modprobe nvmet 00:35:19.720 16:12:22 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:19.720 16:12:22 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:19.977 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:19.977 Waiting for block devices as requested 00:35:19.977 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:35:19.977 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:35:20.235 16:12:22 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:35:20.235 16:12:22 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:20.235 16:12:22 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:35:20.235 16:12:22 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:35:20.235 16:12:22 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:35:20.235 No valid GPT data, bailing 00:35:20.235 16:12:22 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:20.235 16:12:22 -- scripts/common.sh@393 -- # pt= 00:35:20.235 16:12:22 -- scripts/common.sh@394 -- # return 1 00:35:20.235 16:12:22 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:35:20.235 16:12:22 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:35:20.235 16:12:22 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:35:20.235 16:12:22 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:35:20.235 16:12:22 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:35:20.235 16:12:22 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:35:20.235 No valid GPT data, bailing 00:35:20.235 16:12:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:35:20.235 16:12:23 -- scripts/common.sh@393 -- # pt= 00:35:20.235 16:12:23 -- scripts/common.sh@394 -- # return 1 00:35:20.235 16:12:23 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:35:20.235 16:12:23 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:35:20.235 16:12:23 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:35:20.235 16:12:23 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:35:20.235 16:12:23 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:35:20.235 16:12:23 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:35:20.235 No valid GPT data, bailing 00:35:20.235 16:12:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:35:20.235 16:12:23 -- scripts/common.sh@393 -- # pt= 00:35:20.235 16:12:23 -- scripts/common.sh@394 -- # return 1 00:35:20.235 16:12:23 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:35:20.235 16:12:23 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:35:20.235 16:12:23 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:35:20.235 16:12:23 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:35:20.235 16:12:23 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:35:20.235 16:12:23 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:35:20.493 No valid GPT data, bailing 00:35:20.493 16:12:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:35:20.493 16:12:23 -- scripts/common.sh@393 -- # pt= 00:35:20.493 16:12:23 -- scripts/common.sh@394 -- # return 1 00:35:20.493 16:12:23 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:35:20.493 16:12:23 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:35:20.493 16:12:23 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:35:20.493 16:12:23 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:20.493 16:12:23 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:20.493 16:12:23 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:35:20.493 16:12:23 -- nvmf/common.sh@654 -- # echo 1 00:35:20.493 16:12:23 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:35:20.493 16:12:23 -- nvmf/common.sh@656 -- # echo 1 00:35:20.493 16:12:23 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:35:20.493 16:12:23 -- nvmf/common.sh@663 -- # echo tcp 00:35:20.493 16:12:23 -- nvmf/common.sh@664 -- # echo 4420 00:35:20.493 16:12:23 -- nvmf/common.sh@665 -- # echo ipv4 00:35:20.493 16:12:23 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:20.493 16:12:23 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3afe7664-1acb-4c6d-8a94-b57f48f48b78 --hostid=3afe7664-1acb-4c6d-8a94-b57f48f48b78 -a 10.0.0.1 -t tcp -s 4420 00:35:20.493 00:35:20.493 Discovery Log Number of Records 2, Generation counter 2 00:35:20.493 =====Discovery Log Entry 0====== 00:35:20.493 trtype: tcp 00:35:20.493 adrfam: ipv4 00:35:20.493 subtype: current discovery subsystem 00:35:20.493 treq: not specified, sq flow control disable supported 00:35:20.493 portid: 1 00:35:20.493 trsvcid: 4420 00:35:20.493 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:20.493 traddr: 10.0.0.1 00:35:20.493 eflags: none 00:35:20.493 sectype: none 00:35:20.493 =====Discovery Log Entry 1====== 00:35:20.493 trtype: tcp 00:35:20.493 adrfam: ipv4 00:35:20.493 subtype: nvme subsystem 00:35:20.493 treq: not specified, sq flow control disable supported 00:35:20.493 portid: 1 00:35:20.493 trsvcid: 4420 00:35:20.493 subnqn: kernel_target 00:35:20.493 traddr: 10.0.0.1 00:35:20.493 eflags: none 00:35:20.493 sectype: none 00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:20.493 16:12:23 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:23.790 Initializing NVMe Controllers 00:35:23.790 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:23.790 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:23.790 Initialization complete. Launching workers. 00:35:23.790 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 34243, failed: 0 00:35:23.790 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 34243, failed to submit 0 00:35:23.790 success 0, unsuccess 34243, failed 0 00:35:23.790 16:12:26 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:23.790 16:12:26 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:27.073 Initializing NVMe Controllers 00:35:27.073 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:27.073 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:27.073 Initialization complete. Launching workers. 00:35:27.073 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 65961, failed: 0 00:35:27.073 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27387, failed to submit 38574 00:35:27.073 success 0, unsuccess 27387, failed 0 00:35:27.073 16:12:29 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:27.073 16:12:29 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:30.357 Initializing NVMe Controllers 00:35:30.357 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:30.357 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:30.357 Initialization complete. Launching workers. 
00:35:30.357 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 77542, failed: 0 00:35:30.357 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 19410, failed to submit 58132 00:35:30.357 success 0, unsuccess 19410, failed 0 00:35:30.357 16:12:32 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:35:30.357 16:12:32 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:35:30.357 16:12:32 -- nvmf/common.sh@677 -- # echo 0 00:35:30.357 16:12:32 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:35:30.357 16:12:32 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:30.357 16:12:32 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:30.357 16:12:32 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:35:30.357 16:12:32 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:35:30.357 16:12:32 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:35:30.357 00:35:30.357 real 0m10.415s 00:35:30.357 user 0m5.857s 00:35:30.357 sys 0m1.923s 00:35:30.357 16:12:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:30.357 16:12:32 -- common/autotest_common.sh@10 -- # set +x 00:35:30.357 ************************************ 00:35:30.357 END TEST kernel_target_abort 00:35:30.357 ************************************ 00:35:30.357 16:12:32 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:35:30.357 16:12:32 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:35:30.357 16:12:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:30.357 16:12:32 -- nvmf/common.sh@116 -- # sync 00:35:30.357 16:12:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:30.357 16:12:32 -- nvmf/common.sh@119 -- # set +e 00:35:30.357 16:12:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:30.357 16:12:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:35:30.357 rmmod nvme_tcp 00:35:30.357 rmmod nvme_fabrics 00:35:30.357 rmmod nvme_keyring 00:35:30.357 16:12:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:35:30.357 16:12:32 -- nvmf/common.sh@123 -- # set -e 00:35:30.357 16:12:32 -- nvmf/common.sh@124 -- # return 0 00:35:30.357 16:12:32 -- nvmf/common.sh@477 -- # '[' -n 75611 ']' 00:35:30.357 16:12:32 -- nvmf/common.sh@478 -- # killprocess 75611 00:35:30.357 16:12:32 -- common/autotest_common.sh@926 -- # '[' -z 75611 ']' 00:35:30.357 16:12:32 -- common/autotest_common.sh@930 -- # kill -0 75611 00:35:30.357 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (75611) - No such process 00:35:30.357 Process with pid 75611 is not found 00:35:30.357 16:12:32 -- common/autotest_common.sh@953 -- # echo 'Process with pid 75611 is not found' 00:35:30.357 16:12:32 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:35:30.357 16:12:32 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:30.615 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:30.873 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:35:30.873 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:35:30.873 16:12:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:30.873 16:12:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:35:30.873 16:12:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:30.873 16:12:33 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:35:30.873 16:12:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.873 16:12:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:30.873 16:12:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:30.873 16:12:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:35:30.873 00:35:30.873 real 0m23.956s 00:35:30.873 user 0m48.775s 00:35:30.873 sys 0m5.390s 00:35:30.873 16:12:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:30.873 16:12:33 -- common/autotest_common.sh@10 -- # set +x 00:35:30.873 ************************************ 00:35:30.873 END TEST nvmf_abort_qd_sizes 00:35:30.873 ************************************ 00:35:30.873 16:12:33 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:30.873 16:12:33 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:30.873 16:12:33 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:30.873 16:12:33 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:30.873 16:12:33 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:30.873 16:12:33 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:30.873 16:12:33 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:30.873 16:12:33 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:30.873 16:12:33 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:30.873 16:12:33 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:30.873 16:12:33 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:30.873 16:12:33 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:30.873 16:12:33 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:30.873 16:12:33 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:30.873 16:12:33 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:35:30.873 16:12:33 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:35:30.873 16:12:33 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:35:30.873 16:12:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:30.873 16:12:33 -- common/autotest_common.sh@10 -- # set +x 00:35:30.873 16:12:33 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:35:30.873 16:12:33 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:35:30.873 16:12:33 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:35:30.873 16:12:33 -- common/autotest_common.sh@10 -- # set +x 00:35:32.247 INFO: APP EXITING 00:35:32.247 INFO: killing all VMs 00:35:32.247 INFO: killing vhost app 00:35:32.247 INFO: EXIT DONE 00:35:32.813 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:32.813 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:35:32.813 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:35:33.380 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:33.380 Cleaning 00:35:33.380 Removing: /var/run/dpdk/spdk0/config 00:35:33.380 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:33.380 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:33.380 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:33.380 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:33.380 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:33.380 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:33.380 Removing: /var/run/dpdk/spdk1/config 00:35:33.380 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:33.380 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:33.380 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:35:33.380 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:33.638 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:33.638 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:33.638 Removing: /var/run/dpdk/spdk2/config 00:35:33.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:33.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:33.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:33.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:33.638 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:33.638 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:33.638 Removing: /var/run/dpdk/spdk3/config 00:35:33.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:33.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:33.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:33.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:33.638 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:33.638 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:33.638 Removing: /var/run/dpdk/spdk4/config 00:35:33.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:33.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:33.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:33.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:33.638 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:33.638 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:33.638 Removing: /dev/shm/nvmf_trace.0 00:35:33.638 Removing: /dev/shm/spdk_tgt_trace.pid53776 00:35:33.638 Removing: /var/run/dpdk/spdk0 00:35:33.638 Removing: /var/run/dpdk/spdk1 00:35:33.638 Removing: /var/run/dpdk/spdk2 00:35:33.638 Removing: /var/run/dpdk/spdk3 00:35:33.638 Removing: /var/run/dpdk/spdk4 00:35:33.638 Removing: /var/run/dpdk/spdk_pid53632 00:35:33.638 Removing: /var/run/dpdk/spdk_pid53776 00:35:33.638 Removing: /var/run/dpdk/spdk_pid54013 00:35:33.638 Removing: /var/run/dpdk/spdk_pid54198 00:35:33.638 Removing: /var/run/dpdk/spdk_pid54343 00:35:33.638 Removing: /var/run/dpdk/spdk_pid54407 00:35:33.638 Removing: /var/run/dpdk/spdk_pid54476 00:35:33.638 Removing: /var/run/dpdk/spdk_pid54566 00:35:33.638 Removing: /var/run/dpdk/spdk_pid54637 00:35:33.638 Removing: /var/run/dpdk/spdk_pid54675 00:35:33.638 Removing: /var/run/dpdk/spdk_pid54705 00:35:33.638 Removing: /var/run/dpdk/spdk_pid54766 00:35:33.638 Removing: /var/run/dpdk/spdk_pid54843 00:35:33.638 Removing: /var/run/dpdk/spdk_pid55286 00:35:33.638 Removing: /var/run/dpdk/spdk_pid55338 00:35:33.638 Removing: /var/run/dpdk/spdk_pid55389 00:35:33.638 Removing: /var/run/dpdk/spdk_pid55405 00:35:33.638 Removing: /var/run/dpdk/spdk_pid55467 00:35:33.638 Removing: /var/run/dpdk/spdk_pid55484 00:35:33.638 Removing: /var/run/dpdk/spdk_pid55550 00:35:33.638 Removing: /var/run/dpdk/spdk_pid55566 00:35:33.638 Removing: /var/run/dpdk/spdk_pid55612 00:35:33.638 Removing: /var/run/dpdk/spdk_pid55629 00:35:33.639 Removing: /var/run/dpdk/spdk_pid55675 00:35:33.639 Removing: /var/run/dpdk/spdk_pid55693 00:35:33.639 Removing: /var/run/dpdk/spdk_pid55809 00:35:33.639 Removing: /var/run/dpdk/spdk_pid55844 00:35:33.639 Removing: /var/run/dpdk/spdk_pid55918 00:35:33.639 Removing: /var/run/dpdk/spdk_pid55968 00:35:33.639 Removing: /var/run/dpdk/spdk_pid55994 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56051 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56072 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56101 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56126 
00:35:33.639 Removing: /var/run/dpdk/spdk_pid56155 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56169 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56209 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56223 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56258 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56277 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56306 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56326 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56360 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56380 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56413 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56428 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56463 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56482 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56517 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56531 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56565 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56585 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56616 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56641 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56670 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56684 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56724 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56740 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56779 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56795 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56824 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56849 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56879 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56902 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56939 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56962 00:35:33.639 Removing: /var/run/dpdk/spdk_pid56994 00:35:33.639 Removing: /var/run/dpdk/spdk_pid57013 00:35:33.639 Removing: /var/run/dpdk/spdk_pid57048 00:35:33.639 Removing: /var/run/dpdk/spdk_pid57062 00:35:33.639 Removing: /var/run/dpdk/spdk_pid57103 00:35:33.639 Removing: /var/run/dpdk/spdk_pid57165 00:35:33.639 Removing: /var/run/dpdk/spdk_pid57253 00:35:33.639 Removing: /var/run/dpdk/spdk_pid57562 00:35:33.639 Removing: /var/run/dpdk/spdk_pid57574 00:35:33.639 Removing: /var/run/dpdk/spdk_pid57605 00:35:33.639 Removing: /var/run/dpdk/spdk_pid57623 00:35:33.639 Removing: /var/run/dpdk/spdk_pid57631 00:35:33.639 Removing: /var/run/dpdk/spdk_pid57659 00:35:33.639 Removing: /var/run/dpdk/spdk_pid57667 00:35:33.639 Removing: /var/run/dpdk/spdk_pid57686 00:35:33.639 Removing: /var/run/dpdk/spdk_pid57704 00:35:33.639 Removing: /var/run/dpdk/spdk_pid57717 00:35:33.639 Removing: /var/run/dpdk/spdk_pid57730 00:35:33.639 Removing: /var/run/dpdk/spdk_pid57748 00:35:33.897 Removing: /var/run/dpdk/spdk_pid57766 00:35:33.897 Removing: /var/run/dpdk/spdk_pid57774 00:35:33.897 Removing: /var/run/dpdk/spdk_pid57798 00:35:33.897 Removing: /var/run/dpdk/spdk_pid57810 00:35:33.897 Removing: /var/run/dpdk/spdk_pid57824 00:35:33.897 Removing: /var/run/dpdk/spdk_pid57842 00:35:33.897 Removing: /var/run/dpdk/spdk_pid57860 00:35:33.897 Removing: /var/run/dpdk/spdk_pid57872 00:35:33.897 Removing: /var/run/dpdk/spdk_pid57903 00:35:33.897 Removing: /var/run/dpdk/spdk_pid57915 00:35:33.897 Removing: /var/run/dpdk/spdk_pid57943 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58005 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58026 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58041 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58064 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58079 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58081 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58127 00:35:33.897 Removing: 
/var/run/dpdk/spdk_pid58133 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58165 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58167 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58180 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58182 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58195 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58197 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58210 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58212 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58244 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58265 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58280 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58303 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58318 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58320 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58368 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58375 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58406 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58409 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58421 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58424 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58432 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58439 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58447 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58454 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58526 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58575 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58673 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58711 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58752 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58766 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58786 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58801 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58830 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58850 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58913 00:35:33.897 Removing: /var/run/dpdk/spdk_pid58927 00:35:33.898 Removing: /var/run/dpdk/spdk_pid58975 00:35:33.898 Removing: /var/run/dpdk/spdk_pid59051 00:35:33.898 Removing: /var/run/dpdk/spdk_pid59107 00:35:33.898 Removing: /var/run/dpdk/spdk_pid59139 00:35:33.898 Removing: /var/run/dpdk/spdk_pid59229 00:35:33.898 Removing: /var/run/dpdk/spdk_pid59269 00:35:33.898 Removing: /var/run/dpdk/spdk_pid59301 00:35:33.898 Removing: /var/run/dpdk/spdk_pid59516 00:35:33.898 Removing: /var/run/dpdk/spdk_pid59609 00:35:33.898 Removing: /var/run/dpdk/spdk_pid59637 00:35:33.898 Removing: /var/run/dpdk/spdk_pid59949 00:35:33.898 Removing: /var/run/dpdk/spdk_pid59993 00:35:33.898 Removing: /var/run/dpdk/spdk_pid60296 00:35:33.898 Removing: /var/run/dpdk/spdk_pid60711 00:35:33.898 Removing: /var/run/dpdk/spdk_pid60989 00:35:33.898 Removing: /var/run/dpdk/spdk_pid61716 00:35:33.898 Removing: /var/run/dpdk/spdk_pid62523 00:35:33.898 Removing: /var/run/dpdk/spdk_pid62635 00:35:33.898 Removing: /var/run/dpdk/spdk_pid62703 00:35:33.898 Removing: /var/run/dpdk/spdk_pid63967 00:35:33.898 Removing: /var/run/dpdk/spdk_pid64176 00:35:33.898 Removing: /var/run/dpdk/spdk_pid64497 00:35:33.898 Removing: /var/run/dpdk/spdk_pid64606 00:35:33.898 Removing: /var/run/dpdk/spdk_pid64732 00:35:33.898 Removing: /var/run/dpdk/spdk_pid64763 00:35:33.898 Removing: /var/run/dpdk/spdk_pid64787 00:35:33.898 Removing: /var/run/dpdk/spdk_pid64815 00:35:33.898 Removing: /var/run/dpdk/spdk_pid64916 00:35:33.898 Removing: /var/run/dpdk/spdk_pid65048 00:35:33.898 Removing: /var/run/dpdk/spdk_pid65212 00:35:33.898 Removing: /var/run/dpdk/spdk_pid65287 00:35:33.898 Removing: /var/run/dpdk/spdk_pid65673 00:35:33.898 Removing: /var/run/dpdk/spdk_pid66015 
00:35:33.898 Removing: /var/run/dpdk/spdk_pid66022 00:35:33.898 Removing: /var/run/dpdk/spdk_pid68206 00:35:33.898 Removing: /var/run/dpdk/spdk_pid68214 00:35:33.898 Removing: /var/run/dpdk/spdk_pid68494 00:35:33.898 Removing: /var/run/dpdk/spdk_pid68508 00:35:33.898 Removing: /var/run/dpdk/spdk_pid68527 00:35:33.898 Removing: /var/run/dpdk/spdk_pid68558 00:35:33.898 Removing: /var/run/dpdk/spdk_pid68563 00:35:33.898 Removing: /var/run/dpdk/spdk_pid68652 00:35:33.898 Removing: /var/run/dpdk/spdk_pid68654 00:35:33.898 Removing: /var/run/dpdk/spdk_pid68762 00:35:33.898 Removing: /var/run/dpdk/spdk_pid68770 00:35:33.898 Removing: /var/run/dpdk/spdk_pid68878 00:35:33.898 Removing: /var/run/dpdk/spdk_pid68885 00:35:33.898 Removing: /var/run/dpdk/spdk_pid69293 00:35:33.898 Removing: /var/run/dpdk/spdk_pid69337 00:35:33.898 Removing: /var/run/dpdk/spdk_pid69446 00:35:33.898 Removing: /var/run/dpdk/spdk_pid69524 00:35:33.898 Removing: /var/run/dpdk/spdk_pid69827 00:35:33.898 Removing: /var/run/dpdk/spdk_pid70015 00:35:33.898 Removing: /var/run/dpdk/spdk_pid70402 00:35:33.898 Removing: /var/run/dpdk/spdk_pid70930 00:35:33.898 Removing: /var/run/dpdk/spdk_pid71368 00:35:33.898 Removing: /var/run/dpdk/spdk_pid71428 00:35:33.898 Removing: /var/run/dpdk/spdk_pid71481 00:35:34.156 Removing: /var/run/dpdk/spdk_pid71529 00:35:34.156 Removing: /var/run/dpdk/spdk_pid71653 00:35:34.156 Removing: /var/run/dpdk/spdk_pid71708 00:35:34.156 Removing: /var/run/dpdk/spdk_pid71768 00:35:34.156 Removing: /var/run/dpdk/spdk_pid71828 00:35:34.156 Removing: /var/run/dpdk/spdk_pid72149 00:35:34.156 Removing: /var/run/dpdk/spdk_pid73325 00:35:34.156 Removing: /var/run/dpdk/spdk_pid73466 00:35:34.156 Removing: /var/run/dpdk/spdk_pid73709 00:35:34.156 Removing: /var/run/dpdk/spdk_pid74275 00:35:34.156 Removing: /var/run/dpdk/spdk_pid74428 00:35:34.156 Removing: /var/run/dpdk/spdk_pid74585 00:35:34.156 Removing: /var/run/dpdk/spdk_pid74683 00:35:34.156 Removing: /var/run/dpdk/spdk_pid74903 00:35:34.156 Removing: /var/run/dpdk/spdk_pid75014 00:35:34.156 Removing: /var/run/dpdk/spdk_pid75662 00:35:34.156 Removing: /var/run/dpdk/spdk_pid75697 00:35:34.156 Removing: /var/run/dpdk/spdk_pid75731 00:35:34.156 Removing: /var/run/dpdk/spdk_pid75976 00:35:34.156 Removing: /var/run/dpdk/spdk_pid76011 00:35:34.156 Removing: /var/run/dpdk/spdk_pid76041 00:35:34.156 Clean 00:35:34.156 killing process with pid 47923 00:35:34.156 killing process with pid 47930 00:35:34.156 16:12:36 -- common/autotest_common.sh@1436 -- # return 0 00:35:34.156 16:12:36 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:35:34.156 16:12:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:34.156 16:12:36 -- common/autotest_common.sh@10 -- # set +x 00:35:34.156 16:12:36 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:35:34.156 16:12:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:34.156 16:12:36 -- common/autotest_common.sh@10 -- # set +x 00:35:34.156 16:12:36 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:34.156 16:12:36 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:35:34.156 16:12:36 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:35:34.156 16:12:36 -- spdk/autotest.sh@394 -- # hash lcov 00:35:34.156 16:12:36 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:35:34.156 16:12:36 -- spdk/autotest.sh@396 -- # hostname 00:35:34.156 16:12:36 -- spdk/autotest.sh@396 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:35:34.414 geninfo: WARNING: invalid characters removed from testname! 00:36:06.540 16:13:06 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:07.914 16:13:10 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:11.242 16:13:13 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:13.774 16:13:16 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:17.059 16:13:19 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:19.590 16:13:22 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:22.925 16:13:25 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:22.925 16:13:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:22.925 16:13:25 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:22.925 16:13:25 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:22.925 16:13:25 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:22.925 16:13:25 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.925 16:13:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.925 16:13:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.925 16:13:25 -- paths/export.sh@5 -- $ export PATH 00:36:22.925 16:13:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.925 16:13:25 -- common/autobuild_common.sh@437 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:36:22.925 16:13:25 -- common/autobuild_common.sh@438 -- $ date +%s 00:36:22.925 16:13:25 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721664805.XXXXXX 00:36:22.926 16:13:25 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721664805.41hye6 00:36:22.926 16:13:25 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:36:22.926 16:13:25 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']' 00:36:22.926 16:13:25 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:36:22.926 16:13:25 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:36:22.926 16:13:25 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:36:22.926 16:13:25 -- common/autobuild_common.sh@454 -- $ get_config_params 00:36:22.926 16:13:25 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:36:22.926 16:13:25 -- common/autotest_common.sh@10 -- $ set +x 00:36:22.926 16:13:25 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:36:22.926 16:13:25 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:36:22.926 16:13:25 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:36:22.926 16:13:25 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:36:22.926 16:13:25 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:36:22.926 
16:13:25 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:36:22.926 16:13:25 -- spdk/autopackage.sh@19 -- $ timing_finish 00:36:22.926 16:13:25 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:22.926 16:13:25 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:36:22.926 16:13:25 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:22.926 16:13:25 -- spdk/autopackage.sh@20 -- $ exit 0 00:36:22.926 + [[ -n 5123 ]] 00:36:22.926 + sudo kill 5123 00:36:22.937 [Pipeline] } 00:36:22.955 [Pipeline] // timeout 00:36:22.961 [Pipeline] } 00:36:22.979 [Pipeline] // stage 00:36:22.985 [Pipeline] } 00:36:23.005 [Pipeline] // catchError 00:36:23.015 [Pipeline] stage 00:36:23.018 [Pipeline] { (Stop VM) 00:36:23.033 [Pipeline] sh 00:36:23.312 + vagrant halt 00:36:27.499 ==> default: Halting domain... 00:36:34.069 [Pipeline] sh 00:36:34.348 + vagrant destroy -f 00:36:38.530 ==> default: Removing domain... 00:36:38.542 [Pipeline] sh 00:36:38.822 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:36:38.830 [Pipeline] } 00:36:38.846 [Pipeline] // stage 00:36:38.852 [Pipeline] } 00:36:38.867 [Pipeline] // dir 00:36:38.871 [Pipeline] } 00:36:38.882 [Pipeline] // wrap 00:36:38.887 [Pipeline] } 00:36:38.898 [Pipeline] // catchError 00:36:38.905 [Pipeline] stage 00:36:38.906 [Pipeline] { (Epilogue) 00:36:38.918 [Pipeline] sh 00:36:39.195 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:45.772 [Pipeline] catchError 00:36:45.774 [Pipeline] { 00:36:45.786 [Pipeline] sh 00:36:46.062 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:46.320 Artifacts sizes are good 00:36:46.329 [Pipeline] } 00:36:46.347 [Pipeline] // catchError 00:36:46.358 [Pipeline] archiveArtifacts 00:36:46.365 Archiving artifacts 00:36:46.551 [Pipeline] cleanWs 00:36:46.563 [WS-CLEANUP] Deleting project workspace... 00:36:46.563 [WS-CLEANUP] Deferred wipeout is used... 00:36:46.569 [WS-CLEANUP] done 00:36:46.571 [Pipeline] } 00:36:46.586 [Pipeline] // stage 00:36:46.592 [Pipeline] } 00:36:46.610 [Pipeline] // node 00:36:46.615 [Pipeline] End of Pipeline 00:36:46.642 Finished: SUCCESS